Tags
artificial intelligence, computers, conjecture, induction, logic, mathematicians, mathematics, proof
The short answer is, certainly not yet.
Can they ever be? From reading this article, and from my own experience with the geometry-proving-and-construction software called Geometrix, written by my friend and colleague Jacques Gressier, I am not sure it’s possible at all.
Here is an interesting article that I’m copying and pasting from Jerry Becker at SIU, who got it from Quanta:

A proof is a step-by-step logical argument that verifies the truth of a conjecture, or a mathematical proposition. (Once it’s proved, a conjecture becomes a theorem.) It both establishes the validity of a statement and explains why it’s true. A proof is strange, though. It’s abstract and untethered to material experience. “They’re this crazy contact between an imaginary, nonphysical world and biologically evolved creatures,” said the cognitive scientist Simon DeDeo of Carnegie Mellon University, who studies mathematical certainty by analyzing the structure of proofs. “We did not evolve to do this.”
Computers are useful for big calculations, but proofs require something different. Conjectures arise from inductive reasoning – a kind of intuition about an interesting problem – and proofs generally follow deductive, step-by-step logic. They often require complicated creative thinking as well as the more laborious work of filling in the gaps, and machines can’t achieve this combination.
Computerized theorem provers can be broken down into two categories. Automated theorem provers, or ATPs, typically use brute-force methods to crunch through big calculations. Interactive theorem provers, or ITPs, act as proof assistants that can verify the accuracy of an argument and check existing proofs for errors. But these two strategies, even when combined (as is the case with newer theorem provers), don’t add up to automated reasoning.
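To make the distinction concrete, here is a minimal sketch of the interactive style, written in Lean (a proof assistant chosen here purely for illustration; the article does not single out a particular system). The human states each claim and supplies the argument, and the machine checks that every step follows. An ATP, by contrast, would be handed only the statement and left to search for an argument on its own.

```lean
-- A trivially small interactive proof: the user states the claim and writes
-- the proof term; the assistant verifies that the term really proves it.
theorem and_swap (p q : Prop) (h : p ∧ q) : q ∧ p :=
  ⟨h.right, h.left⟩  -- split the hypothesis p ∧ q and reassemble it the other way

-- A statement about numbers, discharged by a lemma from the standard library.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```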
——————–
Plus, the tools haven’t been met with open arms, and the majority of mathematicians don’t use or welcome them. “They’re very controversial for mathematicians,” DeDeo said. “Most of them don’t like the idea.”
A formidable open challenge in the field asks how much proof-making can actually be automated: Can a system generate an interesting conjecture and prove it in a way that people understand? A slew of recent advances from labs around the world suggests ways that artificial intelligence tools may answer that question. Josef Urban at the Czech Institute of Informatics, Robotics and Cybernetics in Prague is exploring a variety of approaches that use machine learning to boost the efficiency and performance of existing provers. In July, his group reported a set of original conjectures and proofs generated and verified by machines. And in June, a group at Google Research led by Christian Szegedy posted recent results from efforts to harness the strengths of natural language processing to make computer proofs more human-seeming in structure and explanation.
Mathematicians, logicians and philosophers have long argued over what part of creating proofs is fundamentally human, and debates about mechanized mathematics continue today, especially in the deep valleys connecting computer science and pure mathematics.
For computer scientists, theorem provers are not controversial. They offer a rigorous way to verify that a program works, and arguments about intuition and creativity are less important than finding an efficient way to solve a problem. At the Massachusetts Institute of Technology, for example, the computer scientist Adam Chlipala has designed theorem-proving tools that generate cryptographic algorithms – traditionally written by humans – to safeguard internet transactions. Already, his group’s code is used for the majority of the communication on Google’s Chrome browser.
———————-
“You can take any kind of mathematical argument and code it with one tool, and connect your arguments together to create proofs of security,” Chlipala said.
In math, theorem provers have helped produce complicated, calculation-heavy proofs that otherwise would have occupied hundreds of years of mathematicians’ lives. The Kepler conjecture, which describes the best way to stack spheres (or, historically, oranges or cannonballs), offers a telling example. In 1998, Thomas Hales, together with his student Sam Ferguson, completed a proof using a variety of computerized math techniques. The result was so cumbersome – it took up 3 gigabytes – that 12 mathematicians analyzed it for years before announcing they were 99% certain it was correct.
The Kepler conjecture isn’t the only famous question to be solved by machines. The four-color theorem, which says you only need four hues to color any two-dimensional map so that no two adjoining regions share a color, was settled in 1977 by mathematicians using a computer program that churned through five-colored maps to show they could all be reduced to four. And in 2016, a trio of mathematicians used a computer program to prove a longstanding open challenge called the Boolean Pythagorean triples problem, but the initial version of the proof was 200 terabytes in size. With a high-speed internet connection, a person could download it in a little over three weeks.
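That last estimate is easy to sanity-check. The sketch below just works through the arithmetic; the sustained connection speed of 800 megabits per second is my assumption, not a figure from the article.

```python
# Back-of-envelope check of the download-time claim for a 200-terabyte proof.

def download_days(size_terabytes: float, speed_mbps: float) -> float:
    """How many days to download `size_terabytes` at `speed_mbps` megabits/second."""
    bits = size_terabytes * 1e12 * 8        # terabytes -> bytes -> bits
    seconds = bits / (speed_mbps * 1e6)     # bits / (bits per second)
    return seconds / 86_400                 # seconds -> days

# At an assumed sustained 800 Mbps, 200 TB takes about 23 days,
# i.e. a little over three weeks.
print(f"{download_days(200, 800):.1f} days")
```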
———————–
Another gripe is that if they want to use theorem provers, mathematicians must first learn to code and then figure out how to express their problem in computer-friendly language – activities that detract from the act of doing math. “By the time I’ve reframed my question into a form that could fit into this technology, I would have solved the problem myself,” Harris said.
Many just don’t see a need for theorem provers in their work. “They have a system, and it’s pencil and paper, and it works,” said Kevin Buzzard, a mathematician at Imperial College London who three years ago pivoted his work from pure math to focus on theorem provers and formal proofs. “Computers have done amazing calculations for us, but they have never solved a hard problem on their own,” he said. “Until they do, mathematicians aren’t going to be buying into this stuff.”
Others say theorem provers can be useful teaching tools, in both computer science and mathematics. At Johns Hopkins University, the mathematician Emily Riehl has developed courses in which students write proofs using a theorem prover. “It forces you to be very organized and think clearly,” she said. “Students who write proofs for the first time can have trouble knowing what they need and understanding the logical structure.”
Theorem provers also offer a way to keep the field honest. In 1999, the Russian American mathematician Vladimir Voevodsky discovered an error in one of his proofs. From then until his death in 2017, he was a vocal proponent of using computers to check proofs. Hales said that he and Ferguson found hundreds of errors in their original proof when they checked it with computers. Even the very first proposition in Euclid’s Elements isn’t perfect. If a machine can help mathematicians avoid such mistakes, why not take advantage of it? (The practical objection, justified or not, is the one suggested by Harris: If mathematicians have to spend their time formalizing math to be understood by a computer, that’s time they’re not spending doing new math.)
Today’s theorem provers weren’t designed to be mathematician-friendly. ATPs, the first type, are generally used to check if a statement is correct, often by testing possible cases. Ask an ATP to verify that a person can drive from Miami to Seattle, for example, and it might search all cities connected by roads leading away from Miami until it eventually finds a city with a road leading into Seattle.
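As a rough picture of that brute-force style of search, here is a small sketch. The road map is invented for the example, and a real ATP explores logical formulas rather than cities, but the exhaustive, case-by-case flavor is the same.

```python
# Toy brute-force route search: exhaustively follow roads out of the start city
# until one path reaches the goal. The map below is made up for illustration.
from collections import deque

ROADS = {
    "Miami": ["Orlando", "Atlanta"],
    "Orlando": ["Atlanta"],
    "Atlanta": ["Nashville", "Dallas"],
    "Nashville": ["Chicago"],
    "Dallas": ["Denver"],
    "Chicago": ["Minneapolis"],
    "Denver": ["Salt Lake City"],
    "Minneapolis": ["Seattle"],
    "Salt Lake City": ["Seattle"],
}

def find_route(start: str, goal: str) -> list[str] | None:
    """Breadth-first search over the road map; returns one route or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        city = path[-1]
        if city == goal:
            return path
        for nxt in ROADS.get(city, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_route("Miami", "Seattle"))
# ['Miami', 'Atlanta', 'Nashville', 'Chicago', 'Minneapolis', 'Seattle']
```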
But here’s the rub: What an ATP doesn’t do is explain its work. All that calculating happens within the machine, and to human eyes it would look like a long string of 0s and 1s. Huang said it’s impossible to scan the proof and follow the reasoning, because it looks like a pile of random data. “No human will ever look at that proof and be able to say, ‘I get it,'” he said.
Efforts at the forefront of the field today aim to blend learning with reasoning. They often combine ATPs with ITPs and also integrate machine learning tools to improve the efficiency of both. They envision ATP/ITP programs that can use deductive reasoning – and even communicate mathematical ideas – the same way people do, or at least in similar ways.
Josef Urban thinks that the marriage of deductive and inductive reasoning required for proofs can be achieved through this kind of combined approach. His group has built theorem provers guided by machine learning tools, which allow computers to learn on their own through experience. Over the last few years, they’ve explored the use of neural networks – layers of computations that help machines process information through a rough approximation of our brain’s neuronal activity. In July, his group reported on new conjectures generated by a neural network trained on theorem-proving data.
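The phrase “layers of computations” can be made concrete with a very small sketch: each layer turns its inputs into new numbers through a weighted sum and a simple nonlinearity, and stacking layers gives the machine something it can adjust as it learns. The weights below are arbitrary placeholders, not anything trained on theorem-proving data.

```python
# A minimal two-layer network, showing only the layered structure the article
# alludes to. Systems like Urban's use much larger networks trained on proof data.

def relu(x: float) -> float:
    """Simple nonlinearity applied after each weighted sum."""
    return max(0.0, x)

def layer(inputs: list[float], weights: list[list[float]]) -> list[float]:
    """One layer: each output is a nonlinear function of a weighted sum of the inputs."""
    return [relu(sum(w * x for w, x in zip(row, inputs))) for row in weights]

# Placeholder weights: 3 inputs -> 2 hidden values -> 1 output score.
hidden = layer([0.5, -1.0, 2.0], [[0.1, 0.4, -0.2], [0.7, -0.3, 0.5]])
score = layer(hidden, [[0.6, 0.8]])
print(score)  # one number a prover could use, for example, to rank a candidate step
```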
Urban was partially inspired by Andrej Karpathy, who a few years ago trained a neural network to generate mathematical-looking nonsense that looked legitimate to nonexperts. Urban didn’t want nonsense, though – he and his group instead designed their own tool to find new proofs after training on millions of theorems. Then they used the network to generate new conjectures and checked the validity of those conjectures using an ATP called E.
The network proposed more than 50,000 new formulas, though tens of thousands were duplicates. “It seems that we are not yet capable of proving the more interesting conjectures,” Urban said.
————————–
Szegedy at Google Research sees the challenge of automating reasoning in computer proofs as a subset of a much bigger field: natural language processing, which involves pattern recognition in the usage of words and sentences. (Pattern recognition is also the driving idea behind computer vision, the object of Szegedy’s previous project at Google.) Like other groups, his team wants theorem provers that can find and explain useful proofs.
Inspired by the rapid development of AI tools like AlphaZero – the DeepMind program that can defeat humans at chess, Go and shogi – Szegedy’s group wants to capitalize on recent advances in language recognition to write proofs. Language models, he said, can demonstrate surprisingly solid mathematical reasoning.
His group at Google Research recently described a way to use language models – which often use neural networks – to generate new proofs. After training the model to recognize a kind of treelike structure in theorems that are known to be true, they ran a kind of free-form experiment, simply asking the network to generate and prove a theorem without any further guidance. Of the thousands of generated conjectures, about 13% were both provable and new (meaning they didn’t duplicate other theorems in the database). The experiment, he said, suggests that the neural net could teach itself a kind of understanding of what a proof looks like.
“Neural networks are able to develop an artificial style of intuition,” Szegedy said.
———————–
After all, if the machines continue to improve, and they have access to vast amounts of data, they should become very good at doing the fun parts, too. “They will learn how to do their own prompts,” Gowers said.
Harris disagrees. He doesn’t think computer provers are necessary, or that they will inevitably “make human mathematicians obsolete.” If computer scientists are ever able to program a kind of synthetic intuition, he says, it still won’t rival that of humans. “Even if computers understand, they don’t understand in a human way.”