Google DeepMind’s FunSearch cracks mathematical puzzles with an LLM



Summary

Google DeepMind’s FunSearch uses a language model to find a previously unknown and better solution to a mathematical problem.

According to the Infinite Monkey Theorem, a monkey with a typewriter and an infinite amount of time would eventually produce Shakespeare – or, presumably, previously unknown mathematical formulas. With FunSearch, Google DeepMind has not put a monkey in front of a typewriter, but rather a “stochastic parrot”, as critics call large language models, which systematically produces better and better results via a feedback loop. The name FunSearch stands for searching in the space of functions.

Unlike typical LLM output, the end result here was the solution to a scientific puzzle – the first time, says Google DeepMind, that a language model has discovered such a solution. “It’s not in the training data—it wasn’t even known,” says co-author Pushmeet Kohli, the company’s vice president of research.

FunSearch combines an LLM with an evolutionary algorithm

FunSearch uses Codey, a code-specific variant of Google’s PaLM 2, to generate new snippets of code within an existing program skeleton that can produce solutions to specific mathematical problems. The system checks whether the generated solutions are better than those already known. The best suggestions are then fed back to Codey along with this feedback, and the process repeats iteratively. FunSearch thus combines an evolutionary algorithm with a language model. “The way we use the LLM is as a creativity engine,” says DeepMind computer scientist Bernardino Romera-Paredes.
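The loop described above can be sketched in a few lines of Python. This is a minimal toy, not DeepMind’s implementation: the real system asks Codey to rewrite promising programs and executes the generated code, whereas here a random parameter nudge stands in for the LLM and the "program" is just a parameter pair scored against a hidden optimum. All names (`funsearch_sketch`, `mutate`, `evaluate`) are illustrative assumptions.

```python
import random

def evaluate(candidate):
    """Score a candidate "program". The real FunSearch evaluator runs
    generated code on the target problem; this toy just measures
    distance to a hidden optimum (3, 7)."""
    a, b = candidate
    return -(abs(a - 3) + abs(b - 7))

def mutate(candidate):
    """Stand-in for the LLM step: FunSearch would ask Codey to rewrite
    a promising program; here we merely nudge parameters at random."""
    a, b = candidate
    return (a + random.choice([-1, 0, 1]), b + random.choice([-1, 0, 1]))

def funsearch_sketch(generations=2000, pool_size=10, seed=0):
    """Evolutionary loop: pick a promising candidate, let the "LLM"
    propose a variant, and keep it only if it beats the current worst."""
    random.seed(seed)
    pool = [(random.randint(-10, 10), random.randint(-10, 10))
            for _ in range(pool_size)]
    for _ in range(generations):
        parent = max(pool, key=evaluate)            # promising candidate
        child = mutate(parent)                      # "LLM" variant
        worst = min(range(pool_size), key=lambda i: evaluate(pool[i]))
        if evaluate(child) > evaluate(pool[worst]):
            pool[worst] = child                     # feedback loop: keep improvements
    return max(pool, key=evaluate)

best = funsearch_sketch()
```

Even with a random mutator, selection pressure alone drives the pool toward the optimum; FunSearch’s bet is that an LLM proposes far better variants than random noise does.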


After a few days and millions of suggestions, FunSearch found code containing a correct and previously unknown solution to the “cap set problem”. The cap set problem in mathematics asks for the maximum size of a set of points (vectors over the three-element field) in which no three distinct elements form an arithmetic progression.
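The constraint is easy to state in code. The sketch below checks the simplified integer version of the condition (no three distinct elements forming an arithmetic progression); the function name is an illustrative assumption, and the actual cap set problem works with vectors rather than plain integers.

```python
from itertools import combinations

def is_progression_free(s):
    """True if no three distinct elements of s form an arithmetic
    progression, i.e. no x < y < z in s with x + z == 2 * y."""
    elems = sorted(s)
    return all(x + z != 2 * y for x, y, z in combinations(elems, 3))

# {1, 2, 4, 5} contains no 3-term progression; adding 3 creates 1, 3, 5.
ok = is_progression_free({1, 2, 4, 5})
bad = is_progression_free({1, 2, 3, 4, 5})
```

A checker like this is exactly the kind of cheap, reliable feedback signal FunSearch needs to score candidate programs.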

Unlike Google DeepMind’s AlphaTensor, which builds on methods inspired by AlphaZero, the use of a language model allows the approach to be extended to other problem domains. To test this, the team also applied FunSearch to the bin-packing problem, which involves packing objects into as few containers as possible. Solutions to this problem matter both in the physical world and in computing – the latter being Google DeepMind’s focus. The system found a heuristic that outperformed those previously designed by humans.

FunSearch paves the way for automatically tailored algorithms

Another advantage is that the solutions FunSearch finds come in the form of code – and can therefore be inspected and understood. The method does, however, require a good feedback signal, which is not available when generating proofs, for example. Still, the team expects FunSearch’s performance to scale with the power of language models:

“The rapid development of LLMs is likely to result in samples of far superior quality at a fraction of the cost, making FunSearch more effective at tackling a broad range of problems,” the paper says. “We envision that automatically-tailored algorithms will soon become common practice and deployed in real-world applications.”
