Hmm, IAMNAM, but it looks like what happened is that the model automatically learned some nontrivial structure to a mathematical problem.
This is pretty f*&king exciting if you ask me.
This isn't the first time! Physicists have been doing this (to math problems) for a few years.
For example:
1)
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.96.06...
2)
https://www.sciencedirect.com/science/article/pii/S037026932...
3)
https://onlinelibrary.wiley.com/doi/full/10.1002/prop.202000...
An expository article:
https://arxiv.org/abs/2101.06317
>the model automatically learned some nontrivial structure
What DL nets do internally is efficient (in many cases optimal) encoding, which seems to be the same thing as the analytical reduction to patterns that we do in our brains. The power of DL nets is limited only by hardware - i.e. not limited at all going forward - and thus I think we humans will soon be left in the dust.
> I think we humans will soon be left in the dust
Plot twist – climate change deniers are bots of the future Skynet
What does IAMNAM stand for?
My guess would be I AM Not A Mathematician
Accounting for imposter syndrome, I Am Maybe Not A Mathematician
Very charitable, thank you, but it was a mistake, apologies for the confusion.
Link to DeepMind blog:
https://deepmind.com/blog/article/exploring-the-beauty-of-pu...
(on the numberphile page).
Discussed (a bit) here:
_Exploring the beauty of pure mathematics in novel ways_ -
https://news.ycombinator.com/item?id=29405380
- Dec 2021 (4 comments)
People might prefer TheConversation link. From the Numberphile page.
https://theconversation.com/mathematical-discoveries-take-in...
Written by Geordie Williamson. Trivia: youngest ever member of the Royal Society.
Ok, I've changed the URL to that from
https://www.numberphile.com/videos/deep-mind-podcast
, which doesn't have much to read. The Deepmind blog post was submitted and discussed (a little bit) here:
_Exploring the beauty of pure mathematics in novel ways_ -
https://news.ycombinator.com/item?id=29405380
- Dec 2021 (4 comments)
That trivia is wrong! Reference:
https://royalsociety.org/blog/2020/05/young-guns/
.
Yep. He is the youngest living member, though.
No, as the link states, Jack Thorne is the youngest living member.
An interesting watch for anyone into math is the "Closer to Truth" series, especially the episodes focused on specific topics.
Particularly the ones on "Is Mathematics Invented or Discovered?".
https://www.youtube.com/watch?v=mlzygqQnAnA
Here is a relevant repository:
https://github.com/deepmind/mathematics_conjectures
Extremely excited about this. I am wildly creative and talented from a mathematical standpoint. It's probably the best language I speak, and the one I'm most proficient in. Mathematics basically comes to me as common sense.
The only thing that might throw me off is notation, which I have developed cheat sheets for. Using the cheat sheets, I verbalize the expression as plain English syntax in sequential order, linearizing it. This always clears up the confusion.
It looks like the basic meta-algorithm is:
1) Notice a pattern in the mathematics
2) Devise a neural network that leverages structural properties of the space you wish to investigate
3) Encode the relationship in a manner that allows supervised learning and see if the net can learn the pattern (a rough sketch of this loop follows below).
4) If it fails, rethink the pattern and go back to step 2, or give up; else use attribution and explainability tools to try to extract human-understandable concepts. Go back to step 2 until the human's understanding converges.
5) Use the extracted concepts to generate a conjecture or aid a proof.
6) Profit
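As promised above, here's a minimal Python sketch of steps 3-4 (not DeepMind's code; `encode`, the choice of MLPRegressor, and the 0.9 score threshold are all placeholder assumptions):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    def supervised_check(objects, target_fn, encode, threshold=0.9):
        """Step 3: encode the relationship and see if a net can learn it.
        Returns (model, held-out R^2); a low score sends you back to step 2."""
        X = np.array([encode(o) for o in objects])
        y = np.array([target_fn(o) for o in objects])
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X_tr, y_tr)
        score = model.score(X_te, y_te)
        if score < threshold:
            return None, score   # step 4: rethink the pattern/encoding, or give up
        return model, score      # step 4/5: run attribution on `model` to extract concepts

    # Toy usage: can the net recover y = x0*x1 + x2 from the raw coordinates?
    objs = np.random.default_rng(0).uniform(-1, 1, size=(500, 3))
    model, score = supervised_check(objs, lambda o: o[0] * o[1] + o[2], encode=lambda o: o)
    print(score)

The point of the wrapper is just that the only outputs a human cares about are "did it learn anything?" and, if so, a trained model to feed into attribution.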
In a way it's like AlphaZero, in that a neural net is helping the searcher prune a highly branching decision space.
DeepMind recently published another paper, on chess (
https://arxiv.org/abs/2111.09259
), which again was about extracting hidden knowledge from a neural network. The authors hope this can be an avenue through which machine learning becomes useful to mathematics research. But even in the cases where their meta-algorithm works, I think they underestimate the difficulty of having expert neural network practitioners on hand, without which I expect step 3 failures to be massively inflated.
----------
Choice extracts:
In each of these cases, the necessary models can be trained within several hours on a machine with a single graphics processing unit
Further, in some domains the functions of interest may be difficult to learn in this paradigm. However, we believe there are many areas that could benefit from our methodology.
The Bruhat interval of a pair of permutations is a partially ordered set of the elements of the group, and it can be represented as a directed acyclic graph. For modelling the Bruhat intervals, we use a message-passing neural network. We add two features at each node representing the in-degree and out-degree of that node.
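(Illustrative aside, not from the paper: those two node features are just per-node in/out degrees, roughly as below if you build the interval as a DAG with networkx; the graph here is a toy stand-in for a Bruhat interval.)

    import networkx as nx
    import numpy as np

    def degree_features(dag: nx.DiGraph) -> np.ndarray:
        """Per-node [in_degree, out_degree] features for a message-passing net."""
        nodes = sorted(dag.nodes())
        return np.array([[dag.in_degree(n), dag.out_degree(n)] for n in nodes],
                        dtype=np.float32)

    # Toy DAG standing in for a (tiny) Bruhat interval.
    g = nx.DiGraph([(0, 1), (0, 2), (1, 3), (2, 3)])
    print(degree_features(g))
    # [[0. 2.]
    #  [1. 1.]
    #  [1. 1.]
    #  [2. 0.]]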
First, to gain confidence that the conjecture is correct, we trained a model to predict coefficients of the KL polynomial from the unlabelled Bruhat interval. We were able to do so across the different coefficients with reasonable accuracy giving some evidence that a general function may exist, as a four-step MPNN is a relatively simple function class. We trained a GraphNet model on the basis of a newly hypothesized representation and could achieve significantly better performance, lending evidence that it is a sufficient and helpful representation to understand the KL polynomial. To understand how the predictions were being made by the learned function f^, we used gradient-based attribution to define a salient subgraph SG for each example interval G
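(Another illustrative aside: gradient-based attribution boils down to something like the sketch below - differentiate the prediction with respect to the node features and keep the highest-scoring nodes as the salient subgraph. The tiny tanh model is a stand-in for the trained MPNN; all names and values here are made up.)

    import jax
    import jax.numpy as jnp

    def toy_model(params, node_feats):
        # Stand-in for a trained MPNN: nonlinear per-node transform, then pool to a scalar.
        w, v = params
        hidden = jnp.tanh(node_feats @ w)        # (num_nodes, hidden_dim)
        return jnp.dot(hidden.sum(axis=0), v)    # scalar prediction

    def salient_nodes(params, node_feats, k=2):
        # Gradient of the prediction with respect to each node's input features.
        grads = jax.grad(toy_model, argnums=1)(params, node_feats)
        scores = jnp.abs(grads).sum(axis=1)      # per-node saliency score
        return jnp.argsort(-scores)[:k]          # indices of the k most salient nodes

    w = jax.random.normal(jax.random.PRNGKey(0), (2, 3))
    v = jax.random.normal(jax.random.PRNGKey(1), (3,))
    feats = jnp.array([[0., 2.], [1., 1.], [1., 1.], [2., 0.]])   # e.g. degree features
    print(salient_nodes((w, v), feats))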
It understands nothing. It's like a blind beaver building dams by instinct and directing water. Deep learning lol.
I’d like to see something similar with quantum computing. How might AI make it easier to play around in the space to uncover functional relationships?
Is there an arxiv link to the paper the author wrote about the Kazhdan-Lusztig polynomials?
https://arxiv.org/abs/2111.15161
Not in favor of the headline. The deepmind blog clearly states
"Our results suggest that ML can complement maths research to guide intuition about a problem by detecting the existence of hypothesised patterns with supervised learning and giving insight into these patterns with attribution techniques from machine learning"
The submitted headline was "Podcast: Google's 'DeepMind' Does Mathematics" because the submitted URL was
https://www.numberphile.com/videos/deep-mind-podcast
.
Since the title of the article we changed it to is vague and baity, I replaced it with a representative sentence from the article body, i.e. where Williamson describes the purpose of the research.
Can anyone elaborate on what these attribution techniques might be? My understanding is that explainability is a bit of a holy grail for DNN research - are reliable techniques starting to emerge?
Math does itself?
Gross
Everyone bumps into the same thing over and over. Structure IS Function. Shouldn’t we talk about “What does that MEAN?”
I’m obsessed with science and my science feelers are _on fire_.
Here’s my rant/manifesto about the topic:
A Fatal Error, or
A Space+Time Trajectory Theory of Abstraction and “Stuff”
https://docs.google.com/document/d/1q8_34UzIaJXHtUy_K_sf2bHc...
Wasn’t Einstein just talking about Duck Typing? Seriously. Think about it. Do we live in a slideshow universe? Have we fully considered every consequence of not living in a slideshow universe?
This is bigger than math, people! Wake up!