In his book "Shadows of the Mind", Roger Penrose details an argument based on Gödel's Incompleteness Theorem to prove that the mathematical capabilities of human mathematicians are non-computable.

Here is a brief description and proof of Gödel's Incompleteness Theorem.

- Let X and Y be members of a data language **D** whose members can be used to represent executable programs that act on elements of **D**.
- A program X applied to input data Y is written X(Y), and may or may not terminate in a finite number of steps.
- Let **pair** be an invertible function which maps D × D to some subset of D. Both **pair** and its inverse should be computable. By an abuse of notation, abbreviate F(pair(X,Y)) as F(X,Y).
- Let **F** be a function with the following property: if F(X,Y) terminates, then X(Y) does not terminate. Such an F is called **consistent**.
- If the converse also holds for such an F, i.e. for all X and Y, X(Y) does not terminate => F(X,Y) terminates, then we call F **complete**.
- Gödel's theorem tells us that there is no F which is both consistent and complete. Proof:
  - Let F be consistent, and define G such that G(X) = F(X,X).
  - Then G(G) = F(G,G), so if G(G) terminates, the consistency of F implies that G(G) does not terminate.
  - This is a contradiction, so we conclude that G(G) does not terminate.
  - But then F(G,G) = G(G) does not terminate either, i.e. F fails to prove the true statement that G(G) does not terminate. Thus F is not complete. **Q.E.D.**
- F can be extended to a new F′, which is in a sense more complete, with F′ defined as F′(X,Y) = {if (X=G and Y=G) then terminate, else F(X,Y)}.
- The F′ defined in the above proof is consistent, and it recognises a larger set of non-terminating programs: everything that F recognises, plus G(G).
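The construction can be sketched in code. This is a toy model only: a real F would be a proof-search procedure over all programs, whereas the F here "knows" just one non-termination fact, and the labels `"loop"` and `"G"` are naming conventions assumed purely for this illustration.

```python
import ast

def pair(x, y):
    """Invertible, computable encoding of D x D into D (strings here)."""
    return repr((x, y))

def unpair(z):
    """Computable inverse of pair."""
    return ast.literal_eval(z)

def F(z):
    """A consistent but very incomplete F: it terminates (returning a
    'proof') only for the program labelled "loop", which we stipulate
    never terminates; on every other input F itself runs forever,
    which is how a consistent F declines to prove a statement."""
    x, y = unpair(z)
    if x == "loop":
        return "proof that loop(%s) does not terminate" % y
    while True:   # F never terminates on inputs it cannot prove
        pass

def G(x):
    """The diagonal program G(X) = F(X,X) derived from F."""
    return F(pair(x, x))

def F_prime(z):
    """The extension F': also terminates on (G, G), so it proves
    strictly more true non-termination statements than F does."""
    x, y = unpair(z)
    if x == "G" and y == "G":
        return "proof that G(G) does not terminate"
    return F(z)
```

Calling `G("G")` here would loop forever, exactly as the proof predicts, while `F_prime(pair("G", "G"))` returns a "proof" of that fact.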

We can regard F as representing a set of logical axioms and rules of proof in a mathematical theory that enables us to prove theorems about the non-termination of computer programs. Thus a terminating execution of F(X,Y) is a "proof" of the theorem that X(Y) does not terminate. The system F is incomplete, in that it does not prove all true statements about non-termination, and F can be explicitly extended to prove a larger set of true statements. Of course the extended F is still incomplete (for exactly the same reason) and can itself be extended, and so on.

We can regard F as an algorithm that specifies the operation of a robotic mathematician tasked with proving theorems about non-terminating programs. Gödel's Incompleteness Theorem tells us that merely by inspecting the design of such a robot, we can "know" something that the robot does not know, i.e. that G(G) as defined above is non-terminating.

Let us make the following assumptions -

- Human beings operate according to the laws of physics.
- The laws of physics are computable, i.e. the behaviour of all physical systems can be predicted using algorithmic calculations.
- From the behaviour of a human mathematician we can extract reliably correct theorems about non-terminating programs (e.g. when the mathematician writes a theorem down, submits it to a respected journal, and states when asked that they are "really, really sure" that the proof of their theorem is correct, then we assume the theorem is reliably correct).

From these assumptions we come to the conclusion that there is some algorithm F which is equivalent to the ability of a human mathematician to state correct theorems about non-terminating programs.

But we can derive G from F, as above, and know that G(G) is a non-terminating program.

But the reader is a human, so we have just proved that a human can know something that a human cannot know.

This is a contradiction, so one of the initial assumptions is wrong.

Roger Penrose wants to come to the conclusion that assumption 2 is incorrect. Some of his opponents challenge assumption 3, i.e. the infallibility of human mathematicians.

I am willing to concede that assumption 3 is correct, for the sake of argument, with the caveat that it may cause difficulties in practice if an attempt is made to carry out procedures based on that assumption. I want to concede the correctness of assumption 3, because I believe that the real fallacy in Penrose's argument lies elsewhere.

To make the contradiction obvious, let the human mathematician who understands that G(G) is non-terminating be the same human mathematician for whom F determines their mathematical ability. If the mathematician was a robot, telling them that G(G) is non-terminating would cause a genuine increase in their mathematical ability. But Roger Penrose claims that the mathematician already knows G(G) is non-terminating, because they understand the Godelian argument.

I will show that this is not the case. We must return to basic principles. The task assigned to the function F is the following -

- Given program X and data Y, determine if X(Y) does not terminate.

The human mathematician is given X=G and Y=G, with G defined such that G(X)=F(X,X), where F is the function that describes the mathematician's own mathematical ability. The mathematician does not know that F is the function that describes their own mathematical ability, unless we tell them so. If they did recognise F, then their understanding of the Gödelian argument would allow them to determine that the G(G) derived from F is non-terminating.

If we tell the mathematician that F is the program determining their mathematical ability, then we are giving them extra information, and that is what enables them to state that G(G) is non-terminating, apparently going beyond the capability determined by F.

We can just as easily program a robot mathematician to accept claims made by trustworthy parties about things that the robot does not already know, for example that a function F is the function that determines that robot's mathematical ability. But the moment that the robot accepts that information, F goes out of date as a description of that robot's mathematical ability.
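The point that F goes out of date can be made concrete. In this sketch the robot's "mathematical ability" is modelled, purely for illustration, as the finite set of non-termination facts it can assert; any description F of that ability is a snapshot that is invalidated the moment a trusted claim is accepted.

```python
class RobotMathematician:
    """Toy robot whose ability is the set of (X, Y) pairs it can
    prove non-terminating (an assumed, highly simplified model)."""

    def __init__(self, facts):
        self.facts = set(facts)

    def proves_non_terminating(self, x, y):
        return (x, y) in self.facts

    def accept_trusted_claim(self, x, y):
        # e.g. being told that the diagonal program G(G), derived from
        # the robot's own prior description, does not terminate
        self.facts.add((x, y))

robot = RobotMathematician({("loop", "data")})
snapshot_F = set(robot.facts)          # a description of the robot's ability
robot.accept_trusted_claim("G", "G")   # trusted information arrives
# The robot's ability now exceeds anything snapshot_F describes.
```

The snapshot plays the role of F: it was accurate when taken, but the act of informing the robot makes it stale.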

If we ignore the difficulties of describing a mathematician's brain precisely enough to be able to formulate F, and we also ignore the question of the infallibility of human mathematical ability, then telling the mathematician which F describes their ability does genuinely increase that ability. There are two things that can happen to a mathematician's understanding of mathematics after they learn of an F that describes their mathematical ability:

- F is simple enough that the mathematician can commit it to memory without sacrificing any of their existing mathematical ability. In which case the mathematician's ability to prove theorems entirely on their own has definitely increased.
- F is so large that the mathematician cannot commit it to memory. Their mathematical ability is only increased if they are allowed to retain a copy of F on some external medium and refer to it as necessary.

The fact that the F bounding a mathematician's mathematical ability can be enlarged just by giving them a (very large) bit of paper with something written on it alerts us to the necessity of being very specific about what resources are available to a mathematician when we attempt to determine an algorithmic bound on that mathematician's ability.

It's just the same argument, but with a different starting point.

And from yet another starting point: the fallacy is obvious if we apply this analysis to a robot mathematician. The derived F will take account of everything **except** the existence of the analysing apparatus itself.

One implication of Roger Penrose's theory is that human mathematicians are capable of producing an unending sequence of mathematical truths unbounded by any algorithmic procedure. Thus I have started the **Algorithmically Unbounded Journal of Mathematical Truths**.

This web page is licensed under the GNU Free Documentation Licence.

Copyright © Philip Dorrell 1999-2007