To prove that no recursive theory of addition and multiplication of the counting numbers can be complete, Gödel relies on the distinction between the subjective and the objective. I suggested this in “Subjective and Objective,” while noting also that, for a computer, all is subjective.
Roger Penrose does not use such language, but draws the conclusion that it warrants. Here he is, in “Précis of The Emperor’s New Mind,” Behavioral and Brain Sciences, 1990:
there is good evidence that conscious thinking is itself not an algorithmic activity, and that consequently the brain must be making use of non-algorithmic physical processes in an essential way whenever consciousness comes into play. There must accordingly be aspects of the brain’s action that cannot be properly simulated by the action of a computer, in the sense that we understand the term “computer” today.
More precisely (as I said in “Subjective and Objective”), the brain does not use a process, but we may use our brain in a process, in the way that we use our arms and legs.
On the hypothesis that there is a computer passing the Turing test (and I allowed in “Artificial Language” that we seemed to have such computers), Penrose concludes,
We must address the operationalist’s claim that such a computer must be said to think, feel, understand, et cetera, merely by virtue of its passing. It is certainly my own view that such mental qualities – and certainly the central one of consciousness – are objective physical attributes that an entity may or may not possess.
Penrose’s “own view” makes no sense to me. For him, as far as I can tell, “objective physical attributes” are just “physical attributes,” as in the sequel:
In our Turing-test probing, we are merely doing our best to ascertain, using the only means available to us, whether the entity in question (in this case the computer) has the physical attributes under consideration (in this case, say, consciousness). To my mind, the situation is not different in principle from, for example, an astronomer trying to ascertain the mass of a distant star.
As far as I understand, the concept of mass is an hypothesis to explain phenomena that can be measured directly, such as
- weight, on earth;
- in the heavens, the distance at which, and the speed with which, two stars orbit one another.
Consciousness is not such an hypothesis.
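Penrose’s comparison with the astronomer can at least be made concrete. Here is a sketch of the inference of mass from orbital motion; this is my own illustration, using the Newtonian formula for a light companion in a circular orbit, not anything from the Précis:

```python
# Mass inferred from orbit: for a circular orbit, v^2 = G M / r,
# so M = v^2 r / G. (My own illustration, not from Penrose.)

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def central_mass(orbit_radius_m: float, orbit_speed_m_s: float) -> float:
    """Mass of the central body implied by a light companion's orbit."""
    return orbit_speed_m_s ** 2 * orbit_radius_m / G

# Earth's orbit: r is about 1.496e11 m, v about 2.978e4 m/s
print(f"{central_mass(1.496e11, 2.978e4):.2e} kg")  # about 1.99e30, the Sun
```

The point of the comparison is that the quantity so inferred is the same for every observer who measures carefully; whether consciousness is like that is exactly what is in question.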
Physical attributes will be agreed on across ideological boundaries, as C. F. von Weizsäcker observed in “Science and the Modern World,” which is chapter 1 of The Relevance of Science (1964):
That physics is science and dialectical materialism is not, for example, became clear in 1955, at the first Geneva conference on the peaceful use of atomic energy. There many western and Soviet physicists met for the first time, and a good deal of classified information was made public. It was a great experience to see that the numerical values of the same atomic constants, measured in deep secrecy in different countries under opposed political systems and creeds, when compared, turned out to be identical down to the last decimal. Nothing of the sort happened with respect to theories on society.
Neither has anything of the sort happened with theories of consciousness, as far as I know. As evidence, I introduce remarks by David L. Gilden and Joseph S. Lappin in “Where is the material of the emperor’s mind?” – which is one of the commentaries published along with Penrose’s Précis. (The bold emphasis is mine, as usual.)
An inquiry into what a mind is always presumes a point of departure, although the implied commitments are rarely manifest. There are two styles of inquiry that can be differentiated in terms of their approach to the relationship between subjectivity and objectivity. The division in style may appear to be metaphysical (and therefore not interesting), but the issue of this relationship is fundamental to the types of questions that get asked and even to the criteria for recognizing an answer.
I wrote “Subjective and Objective” because of my doubts about the meaning of those paired concepts. I gathered some sources that seemed to illustrate the meaning. Another source that I could have included was “Taking Humanity Seriously” (Fare Forward, September 30, 2020). Here, Jennifer Frey writes of visiting
a psychology course aimed at promoting happiness – “Psychology and the Good Life” – [which] had a current enrollment of 1,200 students, which is about a quarter of Yale’s undergraduate population …
There was a moment during our onstage dialogue when I introduced a familiar philosophical thought experiment aimed at generating the intuition that a good human life must have a self-transcendent dimension, that it must make contact with an objective reality outside ourselves that we can really and truly affirm is good. I asked the students to imagine a virtual reality machine that is so advanced one can no longer discern reality from simulation. On the one hand, once plugged into this machine, one seems to experience all the things one wishes for: love, security, professional success, and pleasures of various kinds and degrees. Of course, none of it would be real, but it would seem real. In short, it would meet all the criteria for subjective well-being. On the other hand, to plug oneself into this machine would be to remove oneself from the human world, to refuse the difficult task of being human.
Perhaps few students had read Plato’s Republic; otherwise, Frey might have referred to the similar thought experiment proposed by Glaucon in Book II. Do you want to be just, or only seem just? Glaucon chooses the latter. By Book V, he realizes that in justice as in health, you want the reality, not the appearance.
According to Frey though, the Yale students’ instructor prefers the appearance:
The question I put to the students was this: would they choose to plug themselves into this machine? Is such a life aspirational or worthy of imitation? Could we call such a life “happy”? It is instructive, to say the least, that Yale’s renowned expert on “the good life,” the one to whom throngs of well-heeled students go for lessons on how to be happier, enthusiastically pronounced that she would, without hesitation, lie motionless and alone inside of this machine for the rest of her days …
If Frey’s reporting is accurate, I am relieved not to have attended such a university, but sad for those who did. According to Laurie Santos, creator of the course that Frey visited,
Our goal is
- to equip students with scientifically validated strategies for living a more satisfying life, while also
- creating opportunities for high-striving low-income students and students of color to demonstrate college-readiness.
Should the latter goal not be to create opportunities to become ready for college?
The former goal would seem to make clear the side that Santos takes in the division that Wendell Berry proposes:
What I am against – and without a minute’s hesitation or apology – is our slovenly willingness to allow machines and the idea of the machine to prescribe the terms and conditions of the lives of creatures, which we have allowed increasingly for the last two centuries, and are still allowing, at an incalculable cost to other creatures and to ourselves. If we state the problem that way, then we can see that the way to correct our error, and so deliver ourselves from our own destructiveness, is to quit using our technological capability as the reference point and standard of our economic life. We will instead have to measure our economy by the health of the ecosystems and human communities where we do our work.
It is easy for me to imagine that the next great division of the world will be between people who wish to live as creatures and people who wish to live as machines.
This is from the end of a section called “Creatures As Machines” in “On Edward O. Wilson’s Consilience,” itself a chapter of Life Is a Miracle: An Essay Against Modern Superstition (2000). I quoted the last paragraph also, perhaps obscurely, when writing “On Kant’s Groundwork.” I wrote in “Astronomy Anomaly” on being led to Berry’s book by John Warner.
This is the post that I said I had been working on at the head of “Omniscience.” However, parts of the draft have already been included in “Artificial Language.”
After some preliminary remarks, the Declaration of Independence of the United States of America continues:
We hold these truths to be self-evident,
- that all men are created equal,
- that they are endowed by their Creator with certain unalienable rights,
- that among these rights are
- Life,
- Liberty, and
- the pursuit of Happiness.
What does it mean to have a right to life, when death is inevitable? Possibly it has to do with “what the hell has gone wrong in this twentieth century,” in the words of Robert Pirsig, published in 1974 in Zen and the Art of Motorcycle Maintenance. I quoted those words in “Prairie Life.” I plan to look more at them in a later post (which will be “The System,” though without actually quoting the words again). Now I want to look at the possibility of our holding truths at all.
For telling whether certain other people are telling the truth, there is said to be an algorithm. I found a particular version of the algorithm, and it may be useful for putting current events in some perspective. Here is Robert McElvaine, “When a lying politician tells a bigger truth,” New York Daily News, October 9, 2022. (The link gave me the message, “This content is not available in your region,” so I went to the Internet Archive for the text.)
Lyndon Baines Johnson was a notorious liar. Reporters covering him liked to say, “How do you know when Lyndon Johnson is telling the truth?” The answer: “When he pulls his ear lobe, scratches his chin, he’s telling the truth. When he begins to move his lips, you know he’s lying.” Yet, on this day 58 years ago, Johnson told an ultimate truth about America – a truth our nation desperately needs to hear today.
On October 9, 1964, Johnson gave a campaign speech at the Jung Hotel in New Orleans.
“The people that would use us and destroy us first divide us,” he continued. “And all these years they have kept their foot on our necks by appealing to our animosities and dividing us.”
Is this the promised “ultimate truth about America”? McElvaine does not seem to highlight it as such. Neither does he highlight any other assertion by Johnson. Nonetheless, the one that I have quoted may serve as an ultimate truth, although unfortunately it is not unique to the United States. Divide and conquer: this is not really a conjunction of two commands, but an implication: if you divide, then you will conquer – if that is what you want to do. McElvaine has observed meanwhile,
Nineteen-sixty-four was the year Republicans abandoned the identification as the Party of Lincoln and the Democrats embraced it. One striking example: “I now believe I know how it felt to be a Jew in Hitler’s Germany,” lifelong Republican Jackie Robinson said of being on the floor of the 1964 Republican National Convention …
McElvaine refers to
what would be said implicitly in later years down to the present, to woo white people away from the Democratic Party. These days, the words are wrapped in slightly subtler language …
Apparently that language has been subtle enough to woo some black people from the Democratic Party as well. Or is it not just words that do this?
I set out here to look at what it means when we say certain words are true.
We have certain machines for telling what is true: abacus, slide rule, pocket calculator. We may use such machines in order to assert for ourselves what is true.
There is an idea that we are already machines, running on an algorithm. Perhaps sometimes we can say what the algorithm is, but usually not. The algorithm would then be run by our unconscious or our brain, and we would simply be fed the result.
Nonetheless, an algorithm cannot be the only way we decide what is true. I understand the proof of this to be due to Roger Penrose, supplementing that of J. R. Lucas. The proof depends on Gödel’s Incompleteness Theorem.
I had been aware of Penrose’s book The Emperor’s New Mind. I had heard Penrose speak in Vienna, April 29, 2006, at the conference on Gödel, held when Gödel would have turned a hundred. I knew that Wikipedia said something like what it now does:
The Penrose–Lucas argument about the implications of Gödel’s incompleteness theorem for computational theories of human intelligence was criticized by mathematicians, computer scientists, and philosophers, and the consensus among experts in these fields is that the argument fails, with different authors attacking different aspects of the argument.
If there is such a consensus, I do not share it.
The draft that I posted here as “Gödel, Grammar, and Mathematics” became ultimately “On Gödel’s Incompleteness Theorem” in the Journal of Humanistic Mathematics (Volume 15, Issue 2, July 2025, pages 186–221). The published version omits the section called “Physics,” but adds a section that points out what some popular descriptions of Gödel’s theorem overlook:
- there are some complete theories that we can axiomatize;
- this possibility reflects not the strength of the underlying system, but the weakness.
I thought I should finally read Penrose’s popular account. Fortunately, there was an alternative: “Précis of The Emperor’s New Mind,” Behavioral and Brain Sciences, 1990. I read it. I read a little bit from the ensuing thirty-seven commentaries by others, and some of the Author’s Response. I have spent little time with the Continuing Commentary in the same journal, 1993.
The existence of so much commentary shows that we are not simply doing mathematics. Apparently some of the commentators say we are doing “metamathematics.” I would say we are doing logic, because we are studying how we actually reason. I took up the distinction here in “Mathematics and Logic,” where I also sketched the derivation of Gödel’s Incompleteness Theorem from the existence (which I had also shown in “Hypomnesis”) of a non-recursive, recursively enumerable set of numbers. I make those links to remind myself, but shall not rely on them here.
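The notion of a recursively enumerable set that I just mentioned can be sketched by dovetailing: interleave all candidates with ever-larger step budgets, so that every member of the set is eventually listed, even when no uniform search bound exists. The Collatz-style set below is only a toy stand-in of my own (it may well be recursive), but the enumeration pattern is the general one:

```python
from itertools import count

def reaches_one(n: int, steps: int) -> bool:
    """Semi-decision procedure, truncated at a step budget."""
    for _ in range(steps):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return False

def enumerate_S(limit: int) -> list:
    """Dovetail over (candidate, budget) pairs until `limit` members
    of the set have been listed, in order of discovery."""
    found = []
    for budget in count(1):
        for n in range(1, budget + 1):
            if n not in found and reaches_one(n, budget):
                found.append(n)
        if len(found) >= limit:
            return found[:limit]

print(enumerate_S(5))
```

The listing procedure confirms membership but never refutes it; that asymmetry is what distinguishes a merely enumerable set from a recursive one.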
Say I decide mathematical truth by means of a “system,” which amounts to an algorithm. You can show me that I must also have another way to establish truth, and thus my thought is not simply algorithmic.
You do this by giving me a “Gödel sentence” to evaluate. The sentence means that a number with certain properties does not exist. I try to see why.
If there is such a number, I can factorize it as the product of powers of primes:
2^a ⋅ 3^b ⋅ 5^c ⋅ … ⋅ p^z
Each of the exponents a, b, c, …, z is nonzero, for this is one of the properties of the original number. Thus each of the exponents is in turn the product of powers of primes. For example,
a = 2^a₁ ⋅ 3^a₂ ⋅ 5^a₃ ⋅ … ⋅ p^aₙ.
Thus we get an array of numbers:
| a₁ | a₂ | a₃ | … | aₙ |
| b₁ | b₂ | b₃ | … | bₙ |
| c₁ | c₂ | c₃ | … | cₙ |
| ⋮ | ⋮ | ⋮ | ⋱ | ⋮ |
| z₁ | z₂ | z₃ | … | zₙ |
The only zeros appear at the ends of rows – again, this is a property of the original number.
The original Gödel sentence says there cannot be such an array. Why not?
You can give me a dictionary that converts each positive number into a character in the language that I use for mathematics. Each row of the array above then becomes a sentence of my mathematical language. Moreover, as it turns out, the list of those sentences constitutes a proof of the Gödel sentence that you gave me in the beginning.
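The two levels of coding just described can be sketched in Python. This is my own reconstruction of the arithmetic involved, not Gödel’s exact numbering, and the character dictionary at the end is a made-up toy:

```python
# Pack a sequence of positive exponents into one number as
# 2^a · 3^b · 5^c · ..., then unpack it again.

def primes(n):
    """First n primes, by trial division (fine for small n)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def encode(exponents):
    """2^a · 3^b · 5^c · ... for the given exponents."""
    result = 1
    for p, e in zip(primes(len(exponents)), exponents):
        result *= p ** e
    return result

def decode(n):
    """Recover the exponents by factorizing n over successive primes."""
    exponents = []
    p, earlier = 2, []
    while n > 1:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        exponents.append(e)
        earlier.append(p)
        p += 1
        while any(p % q == 0 for q in earlier):
            p += 1
    return exponents

# A toy dictionary turning numbers back into characters; the
# particular assignment of characters is hypothetical.
dictionary = {1: '0', 2: '=', 3: 'S', 4: '(', 5: ')'}
row = decode(encode([3, 4, 1, 5, 2, 1]))
print(''.join(dictionary[k] for k in row))  # prints S(0)=0
```

The same unpacking, applied twice, takes the original number to the array of rows, and then each row to a sentence of the mathematical language.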
With your help, I have established the existence of a proof of the Gödel sentence, assuming the sentence is false.
Thus, even if the Gödel sentence is false, it is true. Therefore it is true. However, I have established its truth by using the meaning, not just of the sentence itself, but of the number whose existence it denies. This meaning is a proof that the number does not exist. Therefore the number does not exist, and I have proved this nonexistence – but not by means of such a number! Thus my thinking does not simply follow the algorithm mentioned at the beginning.
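The shape of this last step (if the sentence is false, then it has a proof; whatever has a proof is true; therefore the sentence is true) is propositionally valid, as a brute-force truth-table check confirms. This is my own illustration, with P standing for the Gödel sentence and Q for its having a proof:

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

# From (not P -> Q) and (Q -> P), P follows in every valuation.
valid = all(
    P
    for P, Q in product([False, True], repeat=2)
    if implies(not P, Q) and implies(Q, P)
)
print(valid)  # True
```

The substance of the argument lies in the premises themselves (the derivation of a proof from the negation, and the soundness of proof), not in this propositional skeleton.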
In “Minds, Machines and Gödel” (Philosophy, 1961), Lucas argues slightly differently:
any mechanical model of the mind must include a mechanism which can enunciate truths of arithmetic, because this is something which minds can do: in fact, it is easy to produce mechanical models which will in many respects produce truths of arithmetic far better than human beings can. But in this one respect they cannot do so well: in that for every machine there is a truth which it cannot produce as being true, but which a mind can.
The argument that I reviewed before treats me as the machine. I am still able to recognize the truth of the Gödel sentence. I conclude that the system from which the sentence was constructed cannot be the whole of my mathematical system, or algorithm. For all I know, I still have one.
Here is how Penrose summarizes Lucas:
We recall that an essential equivalence exists between formal systems and algorithms as procedures for ascertaining the truth of mathematical propositions. Now suppose that a particular mathematician is using some algorithm – that is, in effect, some formal system F as his means of ascertaining mathematical truth. Then the Gödel proposition Pk(k) constructed from F must be a true proposition also, though it is not possible for our putative algorithmic mathematician to ascertain the truth of Pk(k). This is essentially the argument put forward by Lucas (1961), but it is not yet the desired contradiction, since the mathematician can have no means of knowing what F is, let alone be convinced of its validity as a means of ascertaining truth. We shall need a broader argument than this.
I said that I did recognize Pk(k) as true, and I concluded that F could not be my whole system. Logically then, if I do use F exclusively to decide what is true, then I cannot recognize that Pk(k) is true. You can recognize it though.
Here is where the universality of mathematics comes in. I used it in my own argument that you can give me Pk(k) and show me how to derive a proof of it from its negation. As Penrose puts it,
A mathematical argument that convinces one mathematician – provided that it contains no error – will also convince the other, as soon as the argument has been fully grasped.
Penrose would seem to be saying here what I did in “Ethics of Mathematics”:
- There is a social test for mathematical truth.
- It is not sufficient.
He continues a bit later:
The point is that the arguments establishing mathematical truth are communicable.
Thus we are not talking about various obscure algorithms that might happen to be running around in different particular mathematicians’ heads. We are talking about one universally used (putative) formal system that is equivalent to all the different mathematicians’ algorithms for judging mathematical truth. Now this putative “universal” system, or algorithm, cannot ever be known as the one that we mathematicians use to decide truth …
You showed that my algorithm was incomplete, but you couldn’t show me. It was still possible that you actually knew my algorithm. Now the point is that my algorithm is yours as well. Therefore you cannot know it either.
But this flies in the face of what mathematics is all about! The whole point of our mathematical heritage and training is that we do not bow down to the authority of some obscure rules that we can never hope to understand. We must see – or, at least in principle see – that each step in an argument can be reduced to something simple and obvious.
To my thinking, this is as blatant a reductio ad absurdum as we can hope to achieve, short of an actual mathematical proof. The message should be clear: Mathematical truth is not something we ascertain merely by the use of an algorithm.
Some people may hang on Penrose’s admission that we have not got here “an actual mathematical proof.” Maybe there still is an algorithm, but we are doomed never to know all of it. Still, the algorithm would be known by some “oracle,” in the sense of recursion theory. I talked about the concept of an oracle in “Hypomnesis,” which was based on a visit to Delphi. Let us just say that God would have to know our algorithm. Then God would know, as a theorem, that the algorithm was incomplete – or inconsistent. However, any mathematical theorem that God knows, we can know. We cannot know that our algorithm is incomplete, because this very knowledge requires more than our algorithm. Therefore our algorithm must be inconsistent.
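The notion of an oracle invoked in that paragraph can be sketched as ordinary computation with a black-box membership test. The particular reduction below is a made-up toy of my own; only the pattern matters:

```python
from typing import Callable

# An oracle machine: an ordinary program that may, in one step, ask
# whether a number belongs to a fixed set that the program itself
# need not be able to compute.

def run_with_oracle(n: int, oracle: Callable[[int], bool]) -> bool:
    """A hypothetical reduction: accept n iff the oracle
    accepts n or its successor."""
    return oracle(n) or oracle(n + 1)

# With a decidable oracle this is just ordinary computation; the
# interest lies in oracles, like the halting set, that are not.
print(run_with_oracle(3, lambda k: k % 2 == 0))  # True, since 4 is even
```

An algorithm known only to such an oracle would be, for us, no algorithm at all; that is the force of the dilemma in the paragraph above.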
If revealed to us then, God’s theorem would establish once for all that mathematics was not just in need of reform, but absurd from the start. We could sit with a silly grin, burning our books, page by page, like Bill Murray in the 1984 film version of The Razor’s Edge.
At the beginning of Pilgrim at Tinker Creek (1974), Annie Dillard writes of washing off bloody pawprints, left on her chest in the night by her “old fighting tom”:
What blood was this, and what roses? It could have been the rose of mystic union, the blood of murder, or the rose of beauty bare and the blood of some unspeakable sacrifice or birth. The sign on my body could have been a stain or an emblem, the keys to the kingdom or the mark of Cain. I never knew. I never knew as I washed and the blood streaked, faded, and finally disappeared, whether I’d purified myself or ruined the blood sign of the passover. We wake, if we ever wake at all, to mystery, rumors of death, beauty, violence … “Seem like we’re just set down here,” a woman said to me recently, “and don’t nobody know why.”
Harper’s recently drew my attention to how the magazine had published Dillard’s first chapter in October, 1973.
“We wake, if we ever wake at all …” We do wake, sometimes; otherwise we could not be talking about the possibility of waking.
The passage catches my attention, because of what Robert Pirsig writes in chapter 22 of Zen and the Art of Motorcycle Maintenance:
Mathematical solutions are selected by the subliminal self on the basis of “mathematical beauty,” of the harmony of numbers and forms, of geometric elegance. “This is a true esthetic feeling which all mathematicians know,” Poincaré said, “but of which the profane are so ignorant as often to be tempted to smile.” But it is this harmony, this beauty, that is at the center of it all.
Poincaré understood that thinking was not algorithmic, without specific need for Gödel’s theorem. In a new paragraph, Pirsig continues:
Poincaré made it clear that he was not speaking of romantic beauty, the beauty of appearances which strikes the senses. He meant classic beauty, which comes from the harmonious order of the parts, and which a pure intelligence can grasp, which gives structure to romantic beauty and without which life would be only vague and fleeting, a dream from which one could not distinguish one’s dreams because there would be no basis for making the distinction.
Being caught up in appearances, one might well imagine, with Annie Dillard perhaps, that we never really wake from our dreams. Again though, to imagine it is to acknowledge that we do sometimes wake up.
Dillard recognized that the rose-like pawprints on her chest might mean something. In the sentence “A means B,” A is the subject, and B is the object. The prints then are subjective, at least when considered as signs, and then their meaning is objective.
I talked about a Gödel sentence, which said a number with certain properties didn’t exist. I may know what those properties are, subjectively, without knowing the objective meaning, which is that there is a proof of the Gödel sentence. I said something about this in “Subjective and Objective.”
Pirsig brings in objectivity after he finishes the paragraph:
It is the quest of this special classic beauty, the sense of harmony of the cosmos, which makes us choose the facts most fitting to contribute to this harmony. It is not the facts but the relation of things that results in the universal harmony that is the sole objective reality.
One has to check one’s sense of harmony, as I suggested in “Artificial Language.” Pirsig’s next paragraph is some kind of acknowledgment of this.
What guarantees the objectivity of the world in which we live is that this world is common to us with other thinking beings. Through the communications that we have with other men we receive from them ready-made harmonious reasonings. We know that these reasonings do not come from us and at the same time we recognize in them, because of their harmony, the work of reasonable beings like ourselves. And as these reasonings appear to fit the world of our sensations, we think we may infer that these reasonable beings have seen the same thing as we; thus it is that we know we haven’t been dreaming. It is this harmony, this quality if you will, that is the sole basis for the only reality we can ever know.
I must have read that paragraph many times, because I have read Pirsig’s book many times. Perhaps I never thought about it much, until now, when faced with what people call “artificial intelligence.”
It is designed to present us with “the work of reasonable beings like ourselves.” It gives us a pawprint, inked with blood sucked from another human. If an LLM acts like HAL in 2001, this is only because the LLM has “read” the screenplay:
the training set provisions the language model with a vast repertoire of archetypes and a rich trove of narrative structure on which to draw as it ‘chooses’ how to continue a conversation, refining the role it is playing as it goes, while staying in character … a familiar trope in science fiction is the rogue AI system that attacks humans to protect itself. Hence, a suitably prompted dialogue agent will begin to role-play such an AI system.
Sources:
- Shanahan, M., McDonell, K. & Reynolds, L., “Role play with large language models,” Nature 623, 493–498 (2023), DOI: 10.1038/s41586-023-06647-8 (this is the original);
- Melanie Mitchell, “Why AI chatbots lie to us,” Science 389 (24 July 2025), DOI: 10.1126/science.aea3922 (this quotes the latter part of what I have quoted).
The misinterpretation of experience was either discussed or exemplified by Michael Attaleiates, writing about an earthquake, here in what was Constantinople, on September 23, 1063:
one theory of those who investigate earthquakes as natural phenomena was overturned, namely that the tremors are caused at random and without warning by the flow of water in the hollows of the earth and the turbulence of the winds there. For if the motion was caused, as they claim, solely by the violence of those elements as they twist around in the hollows of the earth and create flows of compressed air, then the tremors would not have any order to them and their vast and irrepressible force would not cease at the point of collapse, lest the entire world be subsequently destroyed. On this occasion the tremor was revealed as a sign sent from God, given that the turbulent motion was both large and also orderly, and its purpose was to restrain and control human urges.
Source:
- Michael Attaleiates, The Histories, translated by Anthony Kaldellis and Dimitris Krallis, volume 16 of the Dumbarton Oaks Medieval Library (Cambridge, Massachusetts, and London, England: Harvard University Press, 2012).
I looked at the passage also in “Early Tulips” and “Effectiveness.”
Edited November 25, 2025,
- to make corrections:
  - (in a quotation:) “may appear to ~~he~~ be metaphysical”;
  - “I said that I did recognize Pk(k) ~~is~~ as true”;
  - “you can give me Pk(k) and show me how to derive ~~from it a proof of~~ a proof of it from its negation”;
- to qualify a categorical assertion: “The former goal ~~makes~~ would seem to make clear the side that Santos takes”;
- to add
  - the forward reference to “The System”;
  - the link to the Wikipedia list of earthquakes in Turkey (I added to that page the 1063 earthquake, because it was not there).

2 Comments
I would like to inform you about the paper: T. J. Stępień, Ł. T. Stępień, “On the Consistency of the Arithmetic System”, Journal of Mathematics and System Science, vol. 7, 43 (2017), arXiv:1803.11072, where a proof of the consistency of the Arithmetic System was published. This proof had been done within this Arithmetic System.
The abstract related to this paper: T. J. Stepien and L. T. Stepien, “On the consistency of Peano’s Arithmetic System”, Bull. Symb. Logic 16, No. 1, 132 (2010). http://www.math.ucla.edu/~asl/bsl/1601-toc.htm
Łukasz T. Stępień
Great. Can you explain where the proof of Gödel’s Second Incompleteness Theorem fails?