One of the great discoveries of 20th-century mathematics was that any dispute about the validity of a mathematical proof can always be resolved. That, at least, is the case put forward by the English mathematician Sir Timothy Gowers in his book “Mathematics: A Very Short Introduction”, written for the general reader.
This is because every mathematical argument can be broken down into ever smaller substeps, each of which is even more clearly valid than the one before. It follows that, in principle, every dispute about a proof must eventually come to an end.
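As a toy illustration of this decomposition (my own sketch, not an example from the book), here is a trivial statement written out in the Lean proof assistant, with the argument spelled out as three elementary substeps that the checker verifies one by one:

```lean
-- Toy example (not from Gowers's book): a fully formal proof of a trivial
-- statement, broken into elementary substeps a checker verifies one by one.
theorem swap_and (p q : Prop) (h : p ∧ q) : q ∧ p := by
  -- Substep 1: from "p and q", extract p.
  have hp : p := h.left
  -- Substep 2: from "p and q", extract q.
  have hq : q := h.right
  -- Substep 3: combine the two parts in the opposite order.
  exact ⟨hq, hp⟩
```

Each substep is so small that there is nothing left to dispute, which is exactly the point of the argument above.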
Nevertheless, for very long and complex proofs, and for unresolved problems, most mathematicians cannot say with absolute certainty that there is no error somewhere in the long sequence of substeps, or that the process of checking will ever come to an end.
In view of the above, why isn't mathematical research impossible? Timothy Gowers will visit ETH Zurich on Wednesday to answer this question as part of this year's Wolfgang Pauli Lectures (see box).
No fear of the big questions
Gowers is a professor of mathematics at Cambridge who specialises in a number of areas, particularly the theory of Banach spaces and combinatorics. In 1998, he was awarded the Fields Medal in Berlin for his remarkable solutions of mathematical problems, some of which had been posed by the Polish mathematician Stefan Banach (1892-1945) in the 1930s. Gowers is known for his ability to make mathematical ideas accessible to a lay audience. He was knighted in 2012 for his services to mathematics.
Timothy Gowers does not shy away from the major unresolved problems. On the contrary, he argues that mathematical research stands out above all for its capacity to tackle unresolved and particularly difficult problems.
He also takes up the controversial question of whether computers are better at solving the most difficult mathematical problems, or whether complicated proofs remain the exclusive domain of human beings.
The computer and us
In his lecture, Sir Timothy will address the problem that it is computationally hard to determine whether a given mathematical statement can really be proven. There is no general algorithm, and therefore no rule that settles the question in a finite number of well-defined steps.
Even if the problem is deliberately restricted to proofs of some fixed maximum length, it remains doubtful whether an algorithm can find a proof within a reasonable time. Human beings, Gowers observes, do nevertheless somehow manage to find even long and complex proofs. Does that mean the human mind is superior to the computer?
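To get a feel for why bounding the length does not tame the search, consider a deliberately crude back-of-the-envelope sketch (my own illustration, not part of Gowers's lecture, with the 40-symbol alphabet assumed purely for the sake of argument): a naive algorithm that tries every candidate proof as a string of symbols must sift through an astronomically large search space.

```python
# Illustrative only: counting the candidate "proofs" a naive brute-force
# algorithm would have to examine if a formal proof were a string over a
# 40-symbol alphabet (the alphabet size is an assumption for illustration).
ALPHABET_SIZE = 40

def candidates_up_to(max_length: int) -> int:
    """Number of strings of length 1..max_length over the alphabet."""
    return sum(ALPHABET_SIZE ** n for n in range(1, max_length + 1))

for length in (10, 50, 100):
    exponent = len(str(candidates_up_to(length))) - 1
    print(f"proofs up to length {length:>3}: roughly 10^{exponent} candidates")

# Even for proofs capped at 100 symbols the count has more than 160 digits,
# so checking every candidate is hopeless within any reasonable time.
```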
Gowers will argue in his Wolfgang Pauli Lecture that this seeming paradox can be resolved without appealing to some mysterious property of the human brain that computers could never hope to emulate.
In his Very Short Introduction, Gowers suggests that computers will improve over the next hundred years, gradually coming to emulate mathematicians in more and more areas until they eventually supplant us entirely.
To the limits of calculation and proof
The lecture turns on the possibilities and limits of algorithmic methods, and on questions from the theory of computation and the theory of mathematical proof, two research fields that draw on mathematics as well as logic and computer science. “Personally, I am curious about the new ideas about these two theories with which Gowers plans to surprise the audience”, says Rahul Pandharipande, ETH Professor of Mathematics and host of this year's Wolfgang Pauli Lectures.
Could the answer possibly lie not in pitting people and computers against each other, but in a division of labour? In 2009, Gowers used his mathematics blog “Gowers's Weblog. Mathematics related discussions” to call on many mathematicians to come together online and cooperate informally on finding the best way to resolve the most challenging mathematical problems. The resulting Polymath project did in fact deliver new results.
In his book, Gowers answers the question “How is mathematical research possible?” by saying that mathematical problems should be challenging, yet leave just a glimmer of hope that they can be resolved.