A very nice paper.
I'm quite happy to see more work on discrete generative models -- probabilistic programming languages are still wrestling with (or simply ignoring!) the problem of "disintegration", where conditioning changes the base measure because it collapses the dimensionality of the probability manifold (similar to the issue Arjovsky identified with GANs). See e.g.:
http://homes.sice.indiana.edu/ccshan/rational/disint2arg.pdf
and
https://probprog.cc/assets/posters/thu/78.pdf
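To make the disintegration problem concrete, here is a toy example (mine, not from the paper or those references) of why naive conditioning breaks on measure-zero events, and why a discrete support sidesteps the issue:

```python
import random

# Rejection sampling cannot condition on the measure-zero event X + Y == 1:
# with (X, Y) uniform on the unit square, essentially no draw ever lands
# exactly on the diagonal, even though P(Y | X + Y = 1) is perfectly well
# defined once you pick a disintegration.
hits = sum(1 for _ in range(10**6)
           if random.random() + random.random() == 1.0)
print(hits)  # almost surely 0

# Discretize the square to a finite grid and the same conditioning becomes
# ordinary filtering and renormalization -- no change of base measure needed.
N = 100
support = [(i, j) for i in range(N + 1) for j in range(N + 1) if i + j == N]
print(len(support))  # 101 equally likely grid points on the diagonal
```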
To me (although this is probably too radical a move to be palatable to most people), we should largely abandon continuous distributions and start building out discrete probability spaces and methods with the same vigor that continuous probability got in the form of measure theory. These probability trees seem like a natural data structure to begin this. I'd also like to see representations for working with probabilities on discrete _manifolds_ that approximate continuous space in computationally efficient ways -- there could be work in this direction already that I'm not aware of.
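To illustrate the probability-tree idea (my own encoding and names, not the paper's): a discrete probability tree can be a plain recursive structure, and conditioning is nothing more than pruning the branches that contradict the evidence and renormalizing the survivors.

```python
# A probability tree as nested tuples: a node is either ("leaf",) or
# ("split", variable, [(prob, outcome, subtree), ...]). Hypothetical
# encoding for illustration only -- not the paper's formulation.

def condition(tree, var, value):
    """Return (prob_of_evidence, conditioned_tree), or (0.0, None)."""
    if tree[0] == "leaf":
        return 1.0, tree
    _, split_var, branches = tree
    kept = []
    for p, v, sub in branches:
        if split_var == var and v != value:
            continue  # branch contradicts the evidence: prune it
        q, sub2 = condition(sub, var, value)
        if q > 0.0:
            kept.append((p * q, v, sub2))
    z = sum(p for p, _, _ in kept)  # mass consistent with the evidence
    if z == 0.0:
        return 0.0, None
    # Renormalize the surviving siblings so they sum to one again.
    return z, ("split", split_var, [(p / z, v, s) for p, v, s in kept])

# Rain -> sprinkler example, conditioned on sprinkler = True.
tree = ("split", "rain",
        [(0.2, True,  ("split", "sprinkler", [(0.01, True,  ("leaf",)),
                                              (0.99, False, ("leaf",))])),
         (0.8, False, ("split", "sprinkler", [(0.40, True,  ("leaf",)),
                                              (0.60, False, ("leaf",))]))])
z, post = condition(tree, "sprinkler", True)
print(z)              # P(sprinkler=True) = 0.2*0.01 + 0.8*0.40 = 0.322
print(post[2][0][0])  # P(rain=True | sprinkler=True) ≈ 0.0062
```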
Also, a fun implication of these probability trees I'd like to see explored is structural sharing: you need not copy an entire tree to represent the result of a conditioning or intervention. In general you need to copy only the nodes lying above the cut set, much as immutable data structures like HAMTs represent modified hash maps with persistent space efficiency by reusing unchanged nodes. If one expects to condition often on certain variables, it would then be useful to hoist such cut sets as high as possible -- does such a 'transpose' operation exist? I admit I did not read the paper thoroughly enough to know whether this was mentioned.
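Here is a sketch of that sharing (same tuple encoding as the previous sketch, again my own construction, assuming each variable is split on at most once along any root-to-leaf path): only the nodes above the cut get rebuilt; everything below it is reused by reference.

```python
def condition_shared(tree, var, value):
    """Persistent conditioning: subtrees below the cut are shared, not copied."""
    if tree[0] == "leaf":
        return 1.0, tree
    _, split_var, branches = tree
    if split_var == var:
        # The cut: keep the matching branch, reusing its subtree by reference.
        for p, v, sub in branches:
            if v == value:
                return p, ("split", split_var, [(1.0, v, sub)])
        return 0.0, None
    # Above the cut: this node must be rebuilt (path copying).
    kept = []
    for p, v, sub in branches:
        q, sub2 = condition_shared(sub, var, value)
        if q > 0.0:
            kept.append((p * q, v, sub2))
    z = sum(p for p, _, _ in kept)
    if z == 0.0:
        return 0.0, None
    return z, ("split", split_var, [(p / z, v, s) for p, v, s in kept])

# Rain -> sprinkler -> wet: the "wet" subtrees hang below the sprinkler cut,
# so conditioning on sprinkler reuses them as-is instead of copying them.
def wet(p):
    return ("split", "wet", [(p, True, ("leaf",)), (1 - p, False, ("leaf",))])

tree = ("split", "rain",
        [(0.2, True,  ("split", "sprinkler", [(0.01, True,  wet(0.99)),
                                              (0.99, False, wet(0.80))])),
         (0.8, False, ("split", "sprinkler", [(0.40, True,  wet(0.90)),
                                              (0.60, False, wet(0.05))]))])

z, post = condition_shared(tree, "sprinkler", True)
print(z)  # P(sprinkler=True) = 0.2*0.01 + 0.8*0.40 = 0.322
# Only the nodes above the cut were copied; the subtree below it is the
# very same object in both trees (this is what a HAMT does for hash maps):
print(post[2][0][2][2][0][2] is tree[2][0][2][2][0][2])  # True
```

The deeper the conditioned variable sits, the more nodes must be copied, which is why hoisting frequently-conditioned variables toward the root (the 'transpose' I'm wondering about) would pay off.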
Looking through
https://www.hackernewspapers.com/
I see it was also posted here with a Colab. I wish HN allowed some way to merge follow-up submissions:
https://news.ycombinator.com/item?id=24938775
This was noted in a recent HN post titled "Understanding Causality Is the Next Challenge for Machine Learning":
https://news.ycombinator.com/item?id=24931923
The one reply about it in that thread said: "It got posted on HN a few days ago but I am surprised that it did not get more traction."
:)