What makes this notable is that the time bound is _proven_ (according to the paper, anyway). N^(1/5) is much slower than we normally think of GNFS as being, but GNFS isn't actually _proven_ to run that fast.
Subexponential algorithms in general have long had proven running times. Dixon's method [1], in particular, has had a proven running time since 1980. The number field sieve itself has had some recent progress on that front [2].
However, those algorithms are _probabilistic_, not _deterministic_, which is what sets this linked method apart.
This algorithm, much like Pollard-Strassen before it, is pretty much irrelevant to integer factorization in practice. But it is nevertheless remarkable to see some progress on deterministic factorization after such a long time.
[1]
https://doi.org/10.1090/S0025-5718-1981-0595059-1
[2]
https://doi.org/10.1016/j.jnt.2017.10.019
Why is it irrelevant in practice? Because the probabilistic algorithms are so much better in practice?
Yes. Between Pollard's rho, SQUFOF, ECM, the various sieves, etc., there is essentially no range at which this one performs better.
I see, thank you for the correction!
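For readers who want a concrete feel for the probabilistic methods named above, here is a minimal sketch of Pollard's rho, the simplest of them. This is an illustration of the general technique only (the function name and example numbers are mine), not code from the paper or from any particular library.

```python
import math
import random

def pollard_rho(n):
    """Minimal Pollard's rho: returns a nontrivial factor of n with high
    probability. Runs in roughly n^(1/4) steps, but only probabilistically."""
    if n % 2 == 0:
        return 2
    # Pseudo-random sequence x -> x^2 + c (mod n); Floyd cycle detection
    # finds a collision modulo an unknown prime factor p, exposed via gcd.
    x = random.randrange(2, n)
    y, c, d = x, random.randrange(1, n), 1
    while d == 1:
        x = (x * x + c) % n           # tortoise: one step
        y = (y * y + c) % n           # hare: two steps
        y = (y * y + c) % n
        d = math.gcd(abs(x - y), n)
    return d                          # may equal n on a bad run; callers retry

# Example: with high probability prints 10007 or 10009 (both prime)
print(pollard_rho(10007 * 10009))
```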
I'm a PhD student who studies stuff similar to this (I work in complexity theory), and one of my favorite "conversation starters" when I meet other researchers is whether they think integer factorization is actually a computationally difficult task.
Perhaps surprisingly, most people don't have strong feelings either way, and a good number of researchers suspect the task is actually efficiently computable (i.e., in polynomial time). At least from the standpoint of complexity theory, there are not really any compelling reasons to think factoring is hard, beyond the fact that we haven't discovered an efficient algorithm for it yet.
Maybe there is something I don't get, but N^(1/5) looks polynomial to me. Quite a small polynomial power, even!
Doesn't cracking this problem boil down to finding a pattern in prime numbers, which doesn't seem to exist? For the researchers who think the task is efficiently computable in polynomial time, what's the reason behind their thinking?
Note, for people who were briefly confused like me, that to be considered polynomial time, it has to be polynomial in the number of bits, i.e. log(N).
Yep, this can be considered pseudo-polynomial time.
The abstract says "computes the prime factorisation of a positive integer N", so it's not your fault you were confused. The abstract should be corrected.
The abstract is correct. A positive integer N is log(N) bits long. If X = log(N), then an algorithm that runs in N^(1/5) time runs in 2^(X/5) time.
There’s nothing wrong with the abstract as written. It makes no claim that the algorithm is polynomial-time.
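To make the "polynomial in the number of bits" point concrete, here is a tiny back-of-the-envelope calculation; the 2048-bit size is just a typical RSA modulus chosen for illustration, not a figure from the paper.

```python
# N^(1/5) rewritten in terms of the bit length n = log2(N):
#   N^(1/5) = 2^(n/5), i.e. exponential in the input size n (log factors ignored).
n_bits = 2048                 # illustrative size of a typical RSA modulus
exponent = n_bits / 5         # N^(1/5) ~ 2^(n/5)
print(f"For a {n_bits}-bit N, N^(1/5) is about 2^{exponent:.0f} operations")
# -> about 2^410: "pseudo-polynomial" in N, but exponential in the bit length
```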
I love a day when I see/understand an idea that is new to me.
This is an extremely well written paper. A high-school student with motivation could understand all of it in about 1 week of work.
The key idea here is in Lemma 3.3 which is a simplified presentation of Lehman's original idea. All of the "good" algorithms in this class are exploiting this observation.
Hittmeir adds a new ingredient, namely applying "baby-step-giant-step" searching (sketched below). And Harvey makes a slightly better parameter choice and strategy than Hittmeir to get his result.
The new idea and value to me would be the concise explanation in Lemma 3.3.
A lot of the feedback is of the form, "who cares, since it doesn't push the state of the art in practical factoring", but for me, there is art in ideas like this. Lehman, Hittmeir and Harvey have put forth a really elegant and beautiful line of thought here---that is accessible!---and I hope you can enjoy it without too much worry about how much it is worth.
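Since "baby-step-giant-step" may be unfamiliar: below is the classic form of that technique, for discrete logarithms, which is where the name comes from. The paper adapts the same square-root time/space trade-off to its own search problem, so treat this purely as a sketch of the general idea (names and example values are mine), not of Harvey's algorithm.

```python
import math

def bsgs_discrete_log(g, h, p):
    """Classic baby-step-giant-step: find x with g^x = h (mod p), p prime,
    using about sqrt(p) multiplications and sqrt(p) table entries instead of
    a brute-force scan over p values. Returns None if no solution exists."""
    m = math.isqrt(p - 1) + 1
    # Baby steps: store g^j for j = 0..m-1
    table = {}
    cur = 1
    for j in range(m):
        table.setdefault(cur, j)
        cur = cur * g % p
    # Giant steps: look for h * (g^-m)^i in the table; then x = i*m + j
    step = pow(g, -m, p)        # modular inverse power (Python 3.8+)
    gamma = h % p
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * step % p
    return None

# Example: 3^x = 13 (mod 17) has solution x = 4, since 3^4 = 81 = 13 (mod 17)
print(bsgs_discrete_log(3, 13, 17))
```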
Does this have practical implications, as in, is this an algorithm that you'd run in practice, or is it mainly a theoretical bound due to constants hidden in the Landau symbol?
The General Number Field Sieve [1], referenced in the paper, is believed to have better asymptotic runtime (and has a number of efficient implementations), but its runtime is not proven. ECM [2] (also referenced in the paper) has a better proven, but probabilistic, runtime.
So most likely, no. This is primarily of theoretical importance (it is now the fastest deterministic algorithm with a proven bound), but it is not an immediate performance gain over existing algorithms.
[1]
https://en.wikipedia.org/wiki/General_number_field_sieve
[2]
https://en.wikipedia.org/wiki/Lenstra_elliptic-curve_factori...
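For reference, the comparison being made is between GNFS's conjectured (heuristic, unproven) subexponential running time and the proven but exponential bound discussed in this thread; roughly, with constants and lower-order terms as usual:

```latex
% Conjectured GNFS running time: subexponential in the bit length, L_N[1/3, (64/9)^{1/3}]
\exp\!\left( \left( (64/9)^{1/3} + o(1) \right) (\ln N)^{1/3} (\ln \ln N)^{2/3} \right)

% Proven deterministic bound discussed here: exponential in the bit length n = \log_2 N
N^{1/5 + o(1)} = 2^{(1/5 + o(1))\, n}
```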
Sounds like GNFS is still better if you believe its conjectured computational complexity, but this still seems like a good advance. The paper's over my head after 10 minutes of reading, though.