The most powerful (and useless) algorithm


0:00 Intro
2:44 The Algorithm
6:38 Why it works
9:28 Code
10:41 Final Thoughts

Our implementation of Universal Search: https://github.com/polylog-cs/univers...

Our Patreon:   / polylog  

Impromptu: https://impromptu.fun/

More about universal search:

-- To prove that universal search is asymptotically optimal, we implicitly used the fact that there exists a linear-time algorithm for multiplying two numbers. This is true in standard models of computation such as the word RAM or the pointer machine (Schönhage's algorithm; see Section 4.3.3 of The Art of Computer Programming). For arithmetic operations, people also often consider a slightly different model in which the time complexity of multiplication is O(n log n) (the algorithm of Harvey & van der Hoeven).

-- A historical note: Levin presented his universal search in the same paper in which he presented his discovery of NP-completeness, made independently of Stephen Cook. This is why the main theorem about NP-completeness is called the Cook-Levin theorem. Levin's paper is from the early 1970s; Blum's speedup theorem (see the paragraphs below) is even older, from the 1960s. You can see how people back then thought about super fundamental questions like "Is there an asymptotically best algorithm for every problem?". It turned out that some of the results proven at the time (Blum's speedup theorem, universal search) were too abstract to be useful; it was NP-completeness and related concepts that won out, and they now form the basis of our theory of algorithms.

-- As we explained it, universal search applies only to "search problems", where you are looking for something (e.g., the factors of a number) and, once you find it, it is easy to check that what you found is correct. However, there is an improved universal search by Hutter that searches not only over all programs but also over all possible mathematical proofs that analyze those programs. This way, you can get an asymptotically optimal algorithm for essentially any problem, except for some absolutely insane cases: when the fastest algorithm cannot be mathematically proven to work (although it does work), or when the time complexity f(n) is such a complicated function that computing the value of f(n) takes more than O(f(n)) time. In fact, Hutter's search can be implemented so that, for an algorithm with time complexity f(n), it takes only 1.01·f(n) + O(1) time (with an absolutely humongous O(1) constant).

-- On the other hand, Blum's speedup theorem says that there exists a certain problem and a certain function f(n) such that the problem has algorithms with complexities O(f(n)), O(log(f(n))), O(log(log(f(n)))), and so on, but no fastest algorithm. Why does this not contradict Hutter's universal search? Because this function f(n) is so weird that computing f(n) takes more than O(f(n)) time.
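-- For the curious, the dovetailing idea behind universal search can be sketched in a few lines of Python. This is only a toy illustration, not the implementation from our repo: the program stream and the checker below are made-up stand-ins (each "program" is a generator that yields None once per simulated step and finally yields a candidate answer), and the scheduling gives program i a 2^(t-i) step budget in phase t, so program i gets roughly half the total compute of program i-1.

```python
from itertools import count

def universal_search(programs, is_correct):
    """Levin-style universal search (toy sketch).

    `programs` is an infinite iterable of generator functions; each
    generator yields None to mark one step of work and yields a
    candidate answer when it has one. `is_correct` verifies candidates.
    """
    running = []                      # (index, started generator) pairs
    stream = enumerate(programs)
    for phase in count(1):
        # Start one new program per phase.
        idx, prog = next(stream)
        running.append((idx, prog()))
        # Run every started program for its exponentially weighted budget.
        for i, gen in running:
            budget = 2 ** (phase - i)
            try:
                for _ in range(budget):
                    result = next(gen)
                    if result is not None and is_correct(result):
                        return result
            except StopIteration:
                pass                  # this program halted without success

def make_programs(n):
    """Illustrative program stream: program k burns k steps, then
    proposes k + 2 as a divisor of n. A real universal search would
    instead enumerate all programs in some fixed language."""
    def program(k):
        def gen():
            for _ in range(k):
                yield None            # one simulated step of computation
            yield k + 2               # the candidate answer
        return gen
    return (program(k) for k in count())

# Usage: find a nontrivial factor of 91 (program 5 proposes 7).
factor = universal_search(
    make_programs(91),
    lambda d: 1 < d < 91 and 91 % d == 0,
)
print(factor)  # → 7
```

The exponential budgets are what make the search asymptotically optimal: if the j-th program solves the problem in time f(n), the scheduler spends only a constant factor (about 2^j) more than f(n) overall, which disappears into the O(1) additive term of the analysis.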



Big thanks to: Tomáš Gavenčiak, Matěj Konečný, Jan Petr, Hanka Rozhoňová, Tom Sláma
Also, big thanks to 3blue1brown and the manim community for manim!

Credits:
To make this video, we used manim, a Python library: https://docs.manim.community/en/stable/
The color palette we use is solarized: https://ethanschoonover.com/solarized/
music: Thannoid by Blue Dot Sessions: https://app.sessions.blue/browse/trac...
picture of monkey: DALL-E 2
picture of Leonid Levin: https://www.cs.bu.edu/~lnd/
picture of Weierstrass and Cantor function: Wikimedia Commons
Brainfuck suggestion: ChatGPT
Levin’s quote taken from here: http://www.hutter1.net/idsia/nipsws.htm
He definitely said something similar here: https://www.cs.bu.edu/fac/lnd/expo/qc...
