
Can deep learning help mathematicians build intuition?

Author: amichail

Score: 99

Comments: 28

Date: 2021-12-02 23:55:44

Web Link

________________________________________________________________________________

Gatsky wrote at 2021-12-03 03:44:43:

Hmm, IAMNAM, but it looks like what happened is that the model automatically learned some nontrivial structure in a mathematical problem.

This is pretty f*&king exciting if you ask me.

spekcular wrote at 2021-12-03 04:12:23:

This isn't the first time! Physicists have been doing this (to math problems) for a few years.

For example:

1)

https://journals.aps.org/prd/abstract/10.1103/PhysRevD.96.06...

2)

https://www.sciencedirect.com/science/article/pii/S037026932...

3)

https://onlinelibrary.wiley.com/doi/full/10.1002/prop.202000...


An expository article:

https://arxiv.org/abs/2101.06317

trhway wrote at 2021-12-03 08:36:45:

> the model automatically learned some nontrivial structure

What DL nets do inside is efficient (in many cases optimal) encoding, which seems to be the same thing as the analytical reduction to patterns that we do in our brains. The power of DL nets is limited only by hardware - i.e. not limited at all going forward - and thus I think we, humans, will soon be left in the dust behind.

dandanua wrote at 2021-12-03 11:53:30:

> I think we, humans, will soon be left in the dust behind

Plot twist – climate change deniers are bots of the future Skynet

montalbano wrote at 2021-12-03 08:13:00:

What does IAMNAM stand for?

AndrewDucker wrote at 2021-12-03 08:19:59:

My guess would be I AM Not A Mathematician

EvanKnowles wrote at 2021-12-03 08:53:39:

Accounting for imposter syndrome, I Am Maybe Not A Mathematician

Gatsky wrote at 2021-12-03 12:05:21:

Very charitable, thank you, but it was a mistake, apologies for the confusion.

ducttapecrown wrote at 2021-12-03 00:48:15:

Link to DeepMind blog:

https://deepmind.com/blog/article/exploring-the-beauty-of-pu...

(on the Numberphile page).

dang wrote at 2021-12-03 01:48:55:

Discussed (a bit) here:

_Exploring the beauty of pure mathematics in novel ways_ -

https://news.ycombinator.com/item?id=29405380

- Dec 2021 (4 comments)

emmelaich wrote at 2021-12-03 01:29:32:

People might prefer The Conversation link, from the Numberphile page.

https://theconversation.com/mathematical-discoveries-take-in...

Written by Geordie Williamson. Trivia: youngest ever member of the Royal Society.

dang wrote at 2021-12-03 01:47:33:

Ok, I've changed the URL to that from

https://www.numberphile.com/videos/deep-mind-podcast

, which doesn't have much to read. The Deepmind blog post was submitted and discussed (a little bit) here:

_Exploring the beauty of pure mathematics in novel ways_ -

https://news.ycombinator.com/item?id=29405380

- Dec 2021 (4 comments)

spekcular wrote at 2021-12-03 05:07:32:

That trivia is wrong! Reference:

https://royalsociety.org/blog/2020/05/young-guns/


onorton wrote at 2021-12-03 14:35:11:

Yep. He is the youngest living member, though.

spekcular wrote at 2021-12-03 16:11:35:

No, as the link states, Jack Thorne is the youngest living member.

EMM_386 wrote at 2021-12-03 02:29:34:

I think the "Closer to Truth" series is an interesting watch for anyone in math, especially the episodes focusing on specific topics.

Particularly the ones on "Is Mathematics Invented or Discovered?":

https://www.youtube.com/watch?v=mlzygqQnAnA

disabled wrote at 2021-12-03 06:42:08:

Here is a relevant repository:

https://github.com/deepmind/mathematics_conjectures

Extremely excited about this. I am wildly creative and talented from a mathematical standpoint. It's probably the best language I speak, and the one I am most proficient in. Mathematics basically comes to me as common sense.

The only thing that might throw me off is notation, for which I have developed cheat sheets. Using the cheat sheets, I verbalize the expression as plain English syntax in sequential order, linearizing it. This always clears up the confusion.
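For instance (an illustrative example, not one from the comment's cheat sheets): the expression

\sum_{i=1}^{n} x_i^2

linearizes to "the sum, as i runs from 1 to n, of x sub i squared".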

Vetch wrote at 2021-12-03 05:43:15:

It looks like the basic meta-algorithm is:

1) Notice a pattern in the mathematics

2) Devise a neural network that leverages structural properties of the space you wish to investigate

3) Encode the relationship in a manner that allows supervised learning, and see if the net can learn a pattern (a minimal sketch follows this list).

4) If it failed, rethink the pattern and go back to step 2 (or give up); else, use attribution and explainability tools to try to extract human-understandable concepts. Go back to step 2 until the human's understanding converges.

5) Use the extracted concepts to generate a conjecture or aid a proof.

6) Profit
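To make step 3 concrete, here is a minimal sketch of the "train and check learnability" step, assuming the objects can be featurised and the target invariant computed; the data, encoding, and model size below are illustrative, not from the paper.

```python
# Minimal sketch of step 3: test whether a hypothesised relationship
# between an encoded object z and a quantity Y(z) is learnable at all.
# Everything here (features, target, model size) is illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy stand-ins for (encoded object, invariant) pairs.
Z = rng.integers(0, 10, size=(1000, 4)).astype(float)
Y = Z.sum(axis=1) + Z[:, 0] * Z[:, 1]  # a "hidden" structural relationship

Z_train, Z_test, y_train, y_test = train_test_split(Z, Y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                     random_state=0).fit(Z_train, y_train)

# Far-above-chance held-out accuracy suggests a learnable pattern exists
# (proceed to step 4 and attribution); near-chance accuracy suggests
# rethinking the encoding and returning to step 2.
print("held-out R^2:", model.score(Z_test, y_test))
```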

In a way it's like AlphaZero, in that a neural net helps the searcher prune a highly branching decision space.

Deepmind recently published another paper, for chess, which again was about extracting hidden knowledge from a neural network:

https://arxiv.org/abs/2111.09259

The authors hope this can be an avenue through which machine learning can be useful to mathematics research. But even in the cases where their meta-algorithm works, I think they underestimate the difficulty of having expert neural network practitioners on hand; without them, I expect step 3 failures will be massively inflated.

----------

Choice extracts:

> In each of these cases, the necessary models can be trained within several hours on a machine with a single graphics processing unit.

> Further, in some domains the functions of interest may be difficult to learn in this paradigm. However, we believe there are many areas that could benefit from our methodology.

> The Bruhat interval of a pair of permutations is a partially ordered set of the elements of the group, and it can be represented as a directed acyclic graph. For modelling the Bruhat intervals, we use a message-passing neural network. We add two features at each node representing the in-degree and out-degree of that node.

> First, to gain confidence that the conjecture is correct, we trained a model to predict coefficients of the KL polynomial from the unlabelled Bruhat interval. We were able to do so across the different coefficients with reasonable accuracy, giving some evidence that a general function may exist, as a four-step MPNN is a relatively simple function class. We trained a GraphNet model on the basis of a newly hypothesized representation and could achieve significantly better performance, lending evidence that it is a sufficient and helpful representation to understand the KL polynomial. To understand how the predictions were being made by the learned function f^, we used gradient-based attribution to define a salient subgraph SG for each example interval G.
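The gradient-based attribution mentioned in the last extract can be sketched in a few lines. Below is a minimal, hypothetical version, not the paper's code: it scores each input component by the magnitude of the gradient of the prediction with respect to it, and the model f and input x are illustrative stand-ins. The paper aggregates scores like these over the edges of a Bruhat interval to pick out a salient subgraph.

```python
# Minimal sketch of gradient-based attribution: score each input
# component by |d f(x) / d x_i|. Model and input are illustrative.
import torch

def salience(f, x):
    x = x.clone().detach().requires_grad_(True)
    f(x).sum().backward()   # gradient of the (scalar) prediction w.r.t. x
    return x.grad.abs()     # large entries = influential input components

f = torch.nn.Sequential(
    torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
x = torch.randn(8)
print(salience(f, x))
```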

cruelty wrote at 2021-12-04 00:14:11:

It understands nothing. It's like a blind beaver building dams by instinct and directing water. Deep learning lol.

dr_dshiv wrote at 2021-12-03 12:29:06:

I’d like to see something similar with quantum computing. How might AI make it easier to play around in the space to uncover functional relationships?

pontusrehula wrote at 2021-12-03 09:29:39:

Is there an arxiv link to the paper the author wrote about the Kazhdan-Lusztig polynomials?

T-A wrote at 2021-12-03 14:59:46:

https://arxiv.org/abs/2111.15161

dexter89_kp3 wrote at 2021-12-03 01:35:21:

Not in favor of the headline. The DeepMind blog clearly states:

"Our results suggest that ML can complement maths research to guide intuition about a problem by detecting the existence of hypothesised patterns with supervised learning and giving insight into these patterns with attribution techniques from machine learning"

dang wrote at 2021-12-03 01:51:13:

The submitted headline was "Podcast: Google's 'DeepMind' Does Mathematics" because the submitted URL was

https://www.numberphile.com/videos/deep-mind-podcast


Since the title of the article we changed it to is vague and baity, I replaced it with a representative sentence from the article body, the one where Williamson describes the purpose of the research.

anonymousDan wrote at 2021-12-03 10:22:59:

Can anyone elaborate on what these attribution techniques might be? My understanding is that explainability is a bit of a holy grail for DNN research - are reliable techniques starting to emerge?

asdfman123 wrote at 2021-12-03 00:14:26:

Math does itself?

loofatoofa wrote at 2021-12-03 00:45:54:

Gross

bionhoward wrote at 2021-12-03 06:44:19:

Everyone bumps into the same thing over and over. Structure IS Function. Shouldn’t we talk about “What does that MEAN?”

I’m obsessed with science and my science feelers are _on fire_.

Here’s my rant/manifesto about the topic:

A Fatal Error, or

A Space+Time Trajectory Theory of Abstraction and “Stuff”

https://docs.google.com/document/d/1q8_34UzIaJXHtUy_K_sf2bHc...

Wasn’t Einstein just talking about Duck Typing? Seriously. Think about it. Do we live in a slideshow universe? Have we fully considered every consequence of not living in a slideshow universe?

This is bigger than math, people! Wake up!