Artificial Language

TL;DR: AI writing is like human writing. Of course it is, since its model is human writing. But then what AI produces is like bad human writing.

My sources include Plato, Wendell Berry, George Orwell, E. B. White, William Deresiewicz, Hadley Freeman, Andrew Kay, Kenneth G. Crawford, Hollis Robbins, Yuval Noah Harari, William Egginton, Megan Fritts, and Vi Hart.


About preparing certain seeds for human consumption in an infusion:

For sensory attributes, I’m admittedly Platonic and believe that since coffee is a fruit, it should taste something like a fruit. (And it’s not just any fruit – it’s a cherry!) My roasting philosophy comes from the same conviction. Generally, I’m after bright, juicy, fruity, syrupy goodness.

Thus Caleb Bilgen, founder of Ánimo Coffee Roasters in Asheville, North Carolina.

          ⯅          

From a terrace beneath an awning, a low wall obscured by ivy, oleander, and quince; on the other side, a lawn with a jungle gym; beyond this, a weeping willow and a small white house beneath umbrella pines

What I see as the sun rises
Altınova, Ayvalik, Balıkesir
September 5, 2025

          ⯆          

A coffee fruit may be called a cherry, but the coffee plant is in the genus Coffea; cherry, Prunus. Still, it is good to ask, with Caleb Bilgen, what a thing is trying to be.

Everything material – everything sensible – is formed on a model, by the account of such Platonic characters as Timaeus, in Timaeus 27e–8b (here with the Loeb translation by Bury):

          ⯅          

ἔστιν οὖν δὴ κατ’ ἐμὴν δόξαν πρῶτον διαιρετέον τάδε· Now first of all we must, in my judgement, make the following distinction.
τί τὸ ὂν ἀεί, γένεσιν δὲ οὐκ ἔχον, What is that which is Existent always and has no Becoming?
καὶ τί τὸ γιγνόμενον μὲν ἀεί, ὂν δὲ οὐδέποτε; And what is that which is Becoming always and never is Existent?
τὸ μὲν δὴ νοήσει μετὰ λόγου περιληπτόν, Now the one of these is apprehensible by thought with the aid of reasoning,
ἀεὶ κατὰ ταὐτὰ ὄν, since it is ever uniformly existent;
τὸ δ’ αὖ δόξῃ μετ’ αἰσθήσεως ἀλόγου δοξαστόν, whereas the other is an object of opinion with the aid of unreasoning sensation,
γιγνόμενον καὶ ἀπολλύμενον, ὄντως δὲ οὐδέποτε ὄν. since it becomes and perishes and is never really existent.
πᾶν δὲ αὖ τὸ γιγνόμενον ὑπ’ αἰτίου τινὸς ἐξ ἀνάγκης γίγνεσθαι· Again, everything which becomes must of necessity become owing to some Cause;
παντὶ γὰρ ἀδύνατον χωρὶς αἰτίου γένεσιν σχεῖν. for without a cause it is impossible for anything to attain becoming.
ὅτου μὲν οὖν ἂν ὁ δημιουργὸς πρὸς τὸ κατὰ ταὐτὰ ἔχον βλέπων ἀεί, τοιούτῳ τινὶ προσχρώμενος παραδείγματι, τὴν ἰδέαν καὶ δύναμιν αὐτοῦ ἀπεργάζηται, But when the artificer of any object, in forming its shape and quality, keeps his gaze fixed on that which is uniform, using a model of this kind,
καλὸν ἐξ ἀνάγκης οὕτως ἀποτελεῖσθαι πᾶν· that object, executed in this way, must of necessity be beautiful;
οὗ δ’ ἂν εἰς γεγονός, γεννητῷ παραδείγματι προσχρώμενος, οὐ καλόν. but whenever he gazes at that which has come into existence and uses a created model, the object thus executed is not beautiful.

          ⯆          

As Timaeus goes on to explain, the “whole heaven or cosmos” (πᾶς οὐρανὸς ἢ κόσμος) is the most beautiful thing, ergo modelled on reason (29a). First made were the gods, from Gê and Uranus on down (40e). They were given the job of making the rest of us (41c). So that we humans could survive, the gods gave us “trees, plants, and seeds” (δένδρα καὶ φυτὰ καὶ σπέρματα, 77a–b). These share with us only the last of the three kinds of soul. Those kinds might be described respectively as mind, spirit, and appetite, as in the Republic; in the Timaeus, they are assigned to specific regions of the body – head, chest, and gut, let us say.

          ⯅          

μετέχει γε μὴν τοῦτο ὃ νῦν λέγομεν τοῦ τρίτου ψυχῆς εἴδους, Certainly that creature which we are now describing partakes of the third kind of soul,
ὃ μεταξὺ φρενῶν ὀμφαλοῦ τε ἱδρῦσθαι λόγος, which is seated, as we affirm, between the midriff and the navel,
ᾧ δόξης μὲν λογισμοῦ τε καὶ νοῦ μέτεστιν τὸ μηδέν, and which shares not at all in opinion and reasoning and mind,
αἰσθήσεως δὲ ἡδείας καὶ ἀλγεινῆς μετὰ ἐπιθυμιῶν. but in sensation, pleasant and painful, together with desires.

          ⯆          

Plants are rooted in the ground (77c); we, through our heads, in the heavens (90a–b). Such is the fanciful account of Timaeus. If Caleb Bilgen were thinking specifically of this, he could talk about how the coffee plant was here to serve us.

Myself, I want to observe that if AI can write like a human, this is because it has all of the best writing by humans to use as a model. However, humans also have something else to use as a model: their own thoughts.

The slogan of Ánimo Coffee Roasters is “Coffee is material.” This is

a call to remember coffee’s intricate and fragile material existence – that it takes highly skilled human labor, finite material resources, and increasingly threatened growing conditions to make coffee’s journey from a seed to your morning cup. Our perception of coffee’s value and current systems for distributing that value need to be reformed accordingly. Coffee is material, so it should be culturally and monetarily valued accordingly.

It is good to heed that call. Here I want to look at the words that make it up: would they be uttered in conversation? Perhaps they would, if the speaker was used to them. Bilgen must be such a person. He is described as “an SCA instructor” as well as founder of the roastery in Asheville. Apparently SCA = Specialty Coffee Association.

I have been quoting Bilgen from an interview in the Coffee Compass by Michael Butterworth, whom I know from Twitter as an American resident of Istanbul, like me. Unlike me, he travels around. Now he writes,

I recently caught up with Caleb in Asheville and found myself both challenged and inspired by the way he thinks about coffee.

Butterworth must have been materially in North Carolina, since he reports that Bilgen’s

roast of an Ethiopia, Basha Bekele was one of the most memorable coffees of the summer for me: juicy forest fruits and complex kola nut notes.

I wouldn’t be able to name a “forest fruit”; nor could I identify the smell or taste of a kola nut, except insofar as it is preserved in drinks such as Coca-Cola (which I did enjoy in youth).

Here again is Caleb Bilgen:

A subjective theory of value observes that market value is ultimately determined by the buyer through a mix of factors both intrinsic and extrinsic to a product. While this is insightful for understanding market dynamics, it ignores a basic ontological truth for all commodities – that they require human labor to be brought into existence (labor theory of value).

This is interesting, although I do not know what the adjective “ontological” is doing. The whole passage reads like something written. I found myself wondering whether the interview by Butterworth had been conducted by email.

Emails today can be composed with the help of AI, which now writes well enough that some humans write no better. I suppose this means AI passes a weak form of the Turing test. Still, Hollis Robbins thinks she can deliver what the title of one of her essays promises. The essay is “How to Tell if Something is AI-Written” (August 13, 2025):

… if you can’t see anything, if nothing springs to mind, it’s probably AI. I say this even with aphantasia, as someone who, to compensate, has long invested words (signifiers) with extra energy that I can somehow sense without seeing.

By that standard, I think the following could be an example of AI writing:

One of the most critical challenges I have encountered in attempting to obtain meaningful and actionable data from regenerative agricultural experts is the almost intangible and, in some cases, seemingly sacred nature of their work.

Many of these experts appear to operate within an epistemological framework that does not readily translate into quantifiable data or prescriptive methodologies, making it difficult to integrate their expertise into structured, scalable reforestation models. This perspective, whether based on an inherent philosophical resistance to large-scale intervention or a belief in the organic, apparently spiritual and sacrosanct nature of regenerative processes, has resulted in limited practical guidance for the structured implementation of food forests in large-scale reforestation efforts.

That writing is in fact claimed by a fellow I worked with on a farm in West Virginia. Apparently he lives in the Philippines now.

Having encountered food forests only in some videos on YouTube, I have the impression that they require intimate personal knowledge – as does the do-nothing farming of Masanobu Fukuoka, described in The One-Straw Revolution. I wrote a little about this in “Liberation” (2015), including that Wendell Berry wrote a preface for Fukuoka’s book. Food forests are inherently not “scalable”: this would seem to follow from what Wendell Berry says, in the essay that I wrote about in “Prairie Life,” namely “Conservation Is Good Work”:

Since the sustainable use of renewable resources depends on the existence of settled, small local economies and communities capable of preserving the local knowledge necessary for good farming and forestry, it is obvious that there is no simple, easy, or quick answer to the problem of the exhaustion of sustainable resources. We probably are not going to be able to conserve natural resources so long as our extraction and use of the goods of nature are wasteful and improperly scaled, or so long as these resources are owned or controlled by absentees, or so long as the standard of extraction and use is profitability rather than the health of natural and human communities.

This is pretty abstract too, but I think I know what Berry is talking about. Hidden by quince branches in the photograph near the top of this post is a vegetable stand that supplies our table. Much of what is sold there is grown in the fields behind the stand. When we show up in the summer, the farmer asks whether I am going to be trimming the overgrown fruit trees in our own garden; he probably wants to take the branches to burn in the winter, as he has done in past years. I think we have here something like what Berry means by a “small local economy,” except that Ayşe and I spend most of the year earning our salaries in Istanbul. We are not usually here when our apricots or pomegranates ripen, but perhaps other people come harvest them.

In the previously quoted passage, “critical challenges … meaningful and actionable data … quantifiable data or prescriptive methodologies” – I’m afraid such phrases call Orwell to mind, from “Politics and the English Language” (1946):

A bad usage can spread by tradition and imitation, even among people who should and do know better. The debased language that I have been discussing is in some ways very convenient. Phrases like

  • a not unjustifiable assumption,
  • leaves much to be desired,
  • would serve no good purpose,
  • a consideration which we should do well to bear in mind,

are a continuous temptation, a packet of aspirins always at one’s elbow. Look back through this essay, and for certain you will find that I have again and again committed the very faults I am protesting against. By this morning’s post I have received a pamphlet dealing with conditions in Germany. The author tells me that he “felt impelled” to write it. I open it at random, and here is almost the first sentence that I see:

[The Allies] have an opportunity not only of achieving a radical transformation of Germany’s social and political structure in such a way as to avoid a nationalistic reaction in Germany itself, but at the same time of laying the foundations of a cooperative and unified Europe.

You see, he “feels impelled” to write – feels, presumably, that he has something new to say – and yet his words, like cavalry horses answering the bugle, group themselves automatically into the familiar dreary pattern. This invasion of one’s mind by ready-made phrases

  • (lay the foundations,
  • achieve a radical transformation)

can only be prevented if one is constantly on guard against them, and every such phrase anesthetizes a portion of one’s brain.

If Orwell himself cannot avoid the fault he condemns, perhaps it is not a fault. I agree with him that it is; still, E. B. White seems to recommend it:

Write in a way that comes easily and naturally to you, using words and phrases that come readily to hand.

That’s from “An Approach to Style” in Strunk and White, The Elements of Style (1959). I looked at White’s advice also in “On Knowing Ourselves.” White’s qualification is essential:

But do not assume that because you have acted naturally your product is without flaw.

The use of language begins with imitation … Never imitate consciously, but do not worry about being an imitator; take pains instead to admire what is good. Then, when you write in a way that comes naturally, you will echo the halloos that bear repeating.

Having read the Greats, can you then write what you feel, and forget it? I think you still have to check your work, as does God in Genesis 1, if not the Artificer or Demiurge in the Timaeus.

Some people don’t like checking their work. William Deresiewicz wrote about them in “American education’s new dark age” (UnHerd, March 21, 2022):

Some years ago, I taught a course in public writing … My students … were expensively educated and impressively credentialed …

This must have been before students could have chatbots write their essays. Nonetheless,

They didn’t know how to read; they didn’t know how to write; and they didn’t know how to think.

Such words corroborate my notion that passing a Turing test doesn’t mean a computer can think. Deresiewicz explains:

… I had them read a short piece of writing pedagogy, then handed out a sheet on which I’d reproduced a single sentence from each of their most recent pieces that needed that kind of attention.

In tenth-grade English, Paul Piazza would hand out sheets with student mistakes, which we were supposed to identify; but they were such mistakes as

  • “the reason is because” instead of “the reason is that”;
  • “neither of them are” instead of “neither of them is”;
  • “one of the … that is” instead of “one of the … that are.”

Perhaps I had tended to make the last mistake. Indeed, I was surprised to learn that it was a mistake. However, once the problem was pointed out, I understood it. Thus I do not know whether our lessons were what Robert Pirsig calls, in chapter 15 of Zen and the Art of Motorcycle Maintenance (1974),

the old slap-on-the-fingers-if-your-modifiers-were-caught-dangling stuff. Correct spelling, correct punctuation, correct grammar. Hundreds of itsy-bitsy rules for itsy-bitsy people. No one could remember all that stuff and concentrate on what he was trying to write about. It was all table manners, not derived from any sense of kindness or decency or humanity, but originally from an egotistic desire to look like gentlemen and ladies. Gentlemen and ladies had good table manners and spoke and wrote grammatically. It was what identified one with the upper classes.

In Montana, however, it didn’t have this effect at all. It identified one, instead, as a stuck-up Eastern ass.

At St Albans School for Boys, in Washington, DC, perhaps indeed we were trained to write like gentlemen. The explicit aim was, “To write clear, correct, reasonably graceful English,” and the main rule was given thus:

The Most Important Precept of Rhetoric

Every sentence must lend itself to logical analysis.

(Every sentence gotta make sense!)

(Source: Ruge Rules, 1979, compiled posthumously from notes and memories left by Ferdinand Ruge, whom I never knew myself; I see the book is available from a certain giant company.)

William Deresiewicz continues on his own students’ inelegant sentences – of which, unfortunately, he does not give an example:

We set to work on the first, dissecting, pruning, and rewriting. After about ten minutes, we had it in decent shape; it wasn’t graceful yet, but at least it was concise. And then I said, “Okay, it’s taken thirteen of the finest minds in Claremont ten minutes to rewrite that sentence. This is what you need to do with every sentence you write.” They looked at me with horror and amazement. It wasn’t just the scale of the task that was rising before them. It was also the fact that no one had bothered to tell them that before.

If those students did not want to have to check their work, how about checking their privilege? According to Hadley Freeman (“Check your privilege! Whatever that means,” The Guardian, 5 June 2013), the practice has been urged since 2006, if not 1998.

The command “Check your privilege” has become one of the great political rallying cries of 2013 … it is a way of telling a person who is making a political point that they should remember they are speaking from a privileged position, because they are, for example, white, male, heterosexual, able-bodied or wealthy. It is, in other words, a sassy exhortation to acknowledge identity politics and intersectionality …

Perhaps it is one of many exhortations that people are more interested in issuing than following.

As far as I know, many people spend a lot of time in front of a mirror, making sure they look their best; what about reading their best, being read at their best?

I don’t know whether worrying about your words can become an obsessive-compulsive disorder. However, it turns out there is

“Cancel Culture OCD,” a new form of the illness whereby people fixate with terror on the prospect of their own cancellation.

So says Andrew Kay in “Shadow of a Doubt” (Harper’s, July 2025), as I said in “Omniscience.” Again, Kay was reporting on the “twenty-eighth Annual OCD Conference in July 2023,” in San Francisco:

Indeed, I would learn that I was surrounded by attendees living in mute dread of this fate – people who visited on themselves the strangulating self-control that is the essence of OCD but also, increasingly, just a barely conscious feature of being alive in a relentlessly surveilled present. From Rachel Schwartz, a research psychologist at Rogers Behavioral Health, I would learn of an OCD patient who recorded every moment of their waking life with their phone, then watched to see if they’d done anything objectionable. And I had again that creeping thought that had brought me here: OCD sufferers were merely amped-up versions of everyone now, or at least of the credentialed classes I knew so well. Was the pathological behavior of patients like Schwartz’s different in kind or simply in degree from that of my own journalistic peers, Twitter-cowed writers who, starting around 2015, self-monitored like cattle who’d internalized the limits of their electric fencing?

I suppose those “Twitter-cowed writers” worry about being shunned, not for poor style, but for crimethink. I pause to note how my grandfather Kenneth Gale Crawford became a journalist, a hundred years ago, according to his memoir. His father had been a dentist.

By the time I was ready for college, Madge, always seeking greener fields, had nagged Doc into assuming a practice in Racine, Wisconsin, a bigger town. Beloit College was my choice because Jumbo, now Admiral, Sanborn, brother of my best friend, who had died of scarlet fever just before I contracted the same disease, had been a Beloit football player. He had misled the college into believing I might be good enough for the Beloit league. Grandma Gale gave me $1,000 to finance my freshman year and that was riches. After that I scratched out enough to finish in various ways: editing the school paper, which split part of its advertising revenues with the staff, stoking furnaces, working summers driving a Ford truck for a building contractor whose son was a school friend in Beloit, and doing for pay part of the job of buying groceries and doing other business for the Sigma Chi fraternity house. Also by borrowing from the Harmon Foundation.

So my grandfather’s best friend died of infectious disease. Those were the days, and the 26th United States Secretary of Health and Human Services would like to bring them back. (I note that scarlet fever itself is bacterial, and treatable with antibiotics, though resistance may be developing.)

As for the Harmon Foundation, I don’t know whether this could be the William E. Harmon Foundation,

established in 1921 by white real-estate developer William E. Harmon (1862–1928) … best known for funding and collecting the work of African-American artists.

I once had a link to “Breaking Racial Barriers: African Americans in the Harmon Foundation Collection” at the National Portrait Gallery, part of the Smithsonian Institution. The link no longer works. The webpage might be down the Memory Hole, because of anticipatory obedience to the 47th President of the United States. However, the page has been preserved at the Internet Archive. Meanwhile, on the NPG website, one can find individual paintings from the Harmon collection, such as the one below.

          ⯅          

Man with goatee, nearly bald, wearing a suit, looks off into the distance while holding spectacles in his right hand, a paper in his left

W.E.B. Du Bois (1868–1963)
by Laura Wheeler Waring

          ⯆          

My grandfather continues in a new paragraph:

Beloit for me was a magical place, a safe haven and at the same time a window on the world. It had less than 1,000 students in my time. We students were a mix of mature First World War veterans and kids. The Sigma Chi Fraternity, to which I had pledged the spring before enrolling (why I don’t remember), was as important as the college. Its chapter house, situated at a remove of almost a mile from the campus, was a world to itself. Most of the brethren were from Milwaukee and Chicago, pretty sophisticated to my eyes. With their encouragement, I went out for everything – publications, track (after being dismissed from football after the first practice) and campus politics. I was beaten for student president in my senior year but won for president of my class. Now and then I could sell a campus story to Chicago and Milwaukee papers. I was Joe College himself and loved it. I dreaded the day when it would be over.

I was, at best, a mediocre student. But I got something out of a quite remarkable collection of elderly eccentric professors closing out their careers – R. B. Way, a theatrical history teacher; Teddy Wright, an art teacher; Pa Calland, who made Latin interesting if not actually exciting. Roscoe Ellard, an alumnus of the Chicago Daily News, who tried to teach writing, which nobody can, was instrumental in getting me my first job – with the United Press in Chicago, pay $25 a week. It didn’t come through until fall, so I spent the summer as counsellor in a boys’ camp in Minnesota. Having no woodcraft specialty, I became the camp heliotherapist; I told the sun-bathers when to turn over. I also made a lot of good friends who were handy to have when later I was assigned to St. Paul.

One may still need such professors and such friends, while AI makes them harder to find. Hollis Robbins writes, in “The Canary in the Classroom” (August 26, 2025),

… If your experience and learning is largely online, you are going to have a harder time being hired as a person by a person.

… AI automation of the hiring process means students can go through college and onto a job platform without a human vouching for their qualities and abilities, without someone making a phone call saying “yes, this young person will show up, be responsible, learn from you,” which is what an employer really needs.

We have looked now at checking one’s work and one’s privilege. Twenty-four centuries ago, Socrates made the call to check one’s life. In the Apology (38a), he explains why he won’t just go away and shut up:

ὁ δὲ ἀνεξέταστος βίος οὐ βιωτὸς ἀνθρώπῳ.

The unexaminable life is not the livable one for a man.

Here “unexaminable” is usually “unexamined,” but the Greek adjective has the same ending as the one for “livable.” The verbal adjective ἀνεξέταστος is apparently from ἀνεξετάζω, the negation of ἐξετάζω, which is ἐξ + ἐτάζω, the last verb also meaning “examine” and perhaps being cognate with ἐτεός “true.” In his 2010 Greek etymological dictionary, Beekes connects that last adjective with ἔτυμος, the source of “etymology.”

I don’t think a computer can be programmed to live an examined life. The LLM is a vampire, feeding on our words, but not making them any better.

I say that, even though an algorithm can apparently examine possible choices – choices of words, or of moves in a game. This is why we can have, in “‘Never summon a power you can’t control’: Yuval Noah Harari on how AI could threaten democracy and divide the world” (The Guardian, August 24, 2024), an excerpt from Harari’s 2024 book Nexus:

both Go professionals and computer experts were stunned in March 2016 when AlphaGo defeated the South Korean Go champion Lee Sedol. In his 2023 book The Coming Wave, Suleyman describes one of the most important moments in their match … It happened during the second game in the match, on 10 March 2016.

“Then … came move number 37,” writes Suleyman. “It made no sense. AlphaGo had apparently blown it, blindly following an apparently losing strategy no professional player would ever pursue … Yet as the endgame approached, that ‘mistaken’ move proved pivotal. AlphaGo won again. Go strategy was being rewritten before our eyes. Our AI had uncovered ideas that hadn’t occurred to the most brilliant players in thousands of years.”

Sorry, I’m not so impressed. The number of available plays on a Go board is 19², or 361, minus the number of stones already on the board. Human players cannot weigh all of those possibilities; a computer can at least enumerate them. True, the number of possible continuations grows exponentially with the number of plays, and dealing with this has apparently been a bigger problem than it was with chess.
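The arithmetic can be made concrete. The following is my own back-of-the-envelope sketch; the function name and the simplifying assumptions (no captures, no passes, no legality checks) are mine, and the numbers are crude upper bounds, not anything computed by AlphaGo:

```python
# Rough arithmetic behind the point above: at any turn there are at
# most 19 * 19 = 361 points to play on, but the number of possible
# continuations multiplies at every move, so exhaustive search is
# hopeless in Go, far more so than in chess.
BOARD = 19 * 19  # 361 points

def upper_bound_sequences(depth, stones_on_board=0):
    """Crude upper bound on move sequences `depth` plies deep,
    ignoring captures, passes, and illegal moves."""
    total = 1
    free = BOARD - stones_on_board
    for _ in range(depth):
        total *= free
        free -= 1  # each move occupies one more point
    return total

print(BOARD)                      # 361
print(upper_bound_sequences(10))  # already astronomically large
```

Even ten plies deep, the count exceeds 10²⁵, which is why no computer enumerates whole games; it is the per-move menu of at most 361 options that is small.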

If you want to impress me with AI, let it solve the problems identified by Wendell Berry that I brought up in “Prairie Life.”

I said that for AlphaGo, “move number 37” was one of no more than 361 possible choices. For AI, these were not really choices, as William Egginton points out in “Why Kant Wouldn’t Fear ChatGPT-4” (Time, August 29, 2023). I’ve brought up this essay twice before, in “Subjective and Objective” and “On Kant’s Groundwork.” Egginton says,

Unlike us, an algorithm selecting a most-likely next word or a program calculating the best move in a game of chess isn’t choosing and can’t feel regret. It hasn’t chosen because, since its information is the same as its reality, it has already explored all available options; it has already traveled down all roads.

Doubtless machines can and do mislead us into thinking that they are performing such cognitive functions as choosing what to do or say. But only a machine that represents the world in code while at the same time sensing it physically – and that experiences the difference between those two – could be said to be making decisions as opposed to just following instructions.

The difference between “representing the world in code” and “sensing it physically” is what Gödel exploits to prove the Incompleteness Theorem, as I intend to discuss in a future post – mentioned at the head of “Omniscience.”


I have known a couple of people with good English who said they learned it from movies. I was willing to believe them. Thus I believe that one can learn English from AI.

I don’t understand those people who think AI can help students write papers in school. The point is not to produce the papers, but to have the experience of writing them – not the experience of having them written by somebody or something else.

Apparently there is disagreement here, as reported or at least surmised by Megan Fritts in “A Matter of Words” (The Point, May 12, 2025). She is talking about being on two “AI response committees,” particularly the one “composed of faculty in the College of Humanities, Arts, Social Sciences and Education – the programs most affected by the sophisticated large language models (LLMs) that are writing our students’ papers”:

While many on the committee bemoan the increased stress of grading student work, others see opportunities for creative assignments involving human-AI collaboration and benefits for students who speak English as a second language. Many also sense a sort of pedagogical necessity in letting – encouraging, even – students to use AI “responsibly,” believing that mastering high-efficiency LLMs is something students need to learn in college to be ready for the workforce once they leave. Various ideas for “rethinking assessment” are bandied about, eliciting suggestions of video essays, grading based on classroom discussion, assignments that include a list of where and how AI was used, and so on. During one of these discussions, I remember offhandedly remarking, “Sure, but I mean, they still need to learn how to write a paper.” It was not until after the resulting awkward pause that I began to see all of these committee meetings were dancing around the unspoken follow-up question that hung dangling from the end of my statement: “Do they?”

Myself, I support the broader lesson (assuming there is one) of what Plato has the title character say at Timaeus 89a (Loeb translation by Bury):

the best motion of a body is that caused by itself in itself; for this is most nearly akin to the motion of intelligence and the motion of the Universe.

Thus if you are constipated, the best thing is gymnastics; second best, undergoing the motions of a carriage or boat. Taking a laxative would be the worst – according to Timaeus.


I looked above at writing that had the feel of AI. It exemplified what George Orwell called the “invasion of one’s mind by ready-made phrases.” This invasion, he said,

can only be prevented if one is constantly on guard against them [namely, the ready-made phrases], and every such phrase anesthetizes a portion of one’s brain.

Orwell was talking about writing by humans; however, in 1984 (as I recall), he described novels written by machine, for the amusement of the masses.

In “How to Tell if Something is AI-Written” (August 13, 2025), I think Hollis Robbins is really describing features of bad writing, as Orwell was. First comes an account of what AI or at least an LLM does:

Briefly, for humans, language unites a signifier (a word, like “tree”) with a signified (an actual or imagined tree) … Large Language Models (LLMs), however, only operate in the realm of signifiers. There are no signifieds. An LLM generates text through a process called autoregression – it predicts the next word in a sequence based on all the words that came before it …
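What Robbins describes can be illustrated with a toy bigram model. This is my own minimal sketch, not how any real LLM works (real models predict over subword tokens with a neural network), but it shares the key property: it manipulates only signifiers.

```python
# A toy autoregressive text generator: it handles only signifiers
# (strings), never signifieds. Each next word is predicted from
# what came before -- here, crudely, from the single previous word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, n):
    words = [start]
    for _ in range(n):
        counts = following.get(words[-1])
        if not counts:
            break
        # Greedy "autoregression": always pick the likeliest next word.
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(generate("the", 5))
```

The program has no tree, no cat, no mat, only the statistics of a word stream; scale the corpus up by a few trillion words and replace the counting with a neural network, and you have the scheme Robbins is gesturing at.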

Robbins discerns a couple of consequences of this. One, we have already seen: “if you can’t see anything, if nothing springs to mind, it’s probably AI.”

I suppose the problem is that the computer cannot do what Orwell describes, again in “Politics and the English Language”:

When you think of a concrete object, you think wordlessly, and then, if you want to describe the thing you have been visualizing, you probably hunt about till you find the exact words that seem to fit it. When you think of something abstract you are more inclined to use words from the start, and unless you make a conscious effort to prevent it, the existing dialect will come rushing in and do the job for you, at the expense of blurring or even changing your meaning.

All of the computer’s thought is abstract in this sense, if the computer is going to be said to think at all.

Since I have already quoted my grandfather on becoming a journalist, let me quote him again on meeting Orwell. Nothing abstract here, it seems to me:

The wait for D-Day was long but not unpleasant. We became well acquainted with our neighbors in Wilton Crescent, whom we encountered at our local pub, The Grenadier, situated at the end of a mews where Wellington’s troop barracks had once been. It was frequented by employees of a nearby hospital and by many movie people, who couldn’t have been less like the Hollywood celebrities. We played darts, exchanged stories about where we were when the last V-2 rocket landed. We drank weak mild and bitter and sometimes went to neighborhood parties. Once a week, when The Grenadier got its one-case quota of Worthington, England’s only really good beer, the publican would close his place – “time gentlemen, please” – and admit us regulars at the back door to drink up the good stuff. The office made few demands on us, though we did work a little. At one point I went on a flying tour with a company of entertainers concerned with morale at U.S. air bases. My part was to make a little speech about the purposes of the war. In London I got to know Orwell; Douglas, the author of South Wind, one of my favorite books; and several MPs. The British were magnificently brave and friendly, all of them in the same leaky boat. A German rocket might land anywhere at any time. The Blitz with manned bombers was over by now but the V-1 buzz bombs took their place and the V-2’s that came next were worse. One couldn’t hear them coming. A building was wrecked or a great hole dug in the pavement as if by evil magic.

I do wonder what my grandfather actually said about the purposes of the war. Perhaps he thought they were sufficiently clear to us readers. Why, though, did he make “purposes” plural? What did he think there was, beyond defeating fascism?

Ayşe and I visited the Grenadier with my mother, some time in the aughts. The bartender seemed excited to think we were making some kind of pilgrimage, but we didn’t see it that way. I’m not sure my mother would have known about the Grenadier, had I not pointed out the memoir passage above.

After my uncle graduated from Beloit, his father said something like, “Well Bill, I don’t know what you want to do now, but I could probably get you a job at a newspaper.” I think Bill took the offer. He ended up in television news. I guess this involved writing and editing words that others would read on the air. As I understood from him once, if you are going to write about people, you should not just name them, but tell us something they said.

Had he told us something that Orwell or Douglas said, this might have gummed up my grandfather’s narrative. He does quote a more pertinent exchange in his next paragraph, from the eve of D-Day:

… When the time came the Navy gave us a big going-away party in Plymouth and I was put aboard an assault transport ship manned by the Coast Guard. I was feeling no pain and feisty. “Why not a first wave assignment?” I asked the captain. “No reason,” he said. “I’ll give you a first-wave boat.” He did and I woke up the next morning slightly hung-over and regretting my chestiness of the night before.

We noted earlier that the chest was the seat of spirit in the Timaeus.

According to Cornelius Ryan in The Longest Day (which I have not actually read any more of than this),

The great square-faced ramps of the assault craft butted into every wave, and chilling, frothing green water sloshed over everyone. There were no heroes in these boats – just cold, miserable, anxious men, so jam-packed together, so weighed down by equipment that often there was no place to be seasick except over one another. Newsweek’s Kenneth Crawford, in the first Utah wave, saw a young 4th Division soldier, covered in his own vomit, slowly shaking his head in abject misery and disgust. “That guy Higgins,” he said, “ain’t got nothin’ to be proud of about inventin’ this goddamned boat.”

I have gummed up my own narrative with family reminiscences, but also with what would seem to be good human writing, though some of the goodness may be inherited from the thrill of what it describes. After this, I return to how Hollis Robbins distinguishes AI writing. She has another rule:

look for formulations like “it’s not just X, but also Y” or “rather than A, we should focus on B.” This structure is a form of computational hedging. Because an LLM only knows the relationships between words, not between words and the world, it wants to avoid falsifiable claims. (I’m saying want here as a joke but it helps to see LLMs as wafflers.) By being all balance-y it can sound comprehensive without committing to anything …

Again, you may notice that no specific idea or image or word springs to mind when you’re reading AI-generated words. You can see this in phrases like

  • “It’s not only efficiency that matters, but also stakeholder engagement.”
  • “Rather than focusing on obstacles, we should embrace transformative opportunities.”
  • “Effective collaboration requires not only interpersonal care but also the strategic navigation of team synergy.”
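Robbins’s rule could even be mechanized. Here is a crude sketch, with made-up patterns of my own, that flags the two hedging formulas she names:

```python
import re

# Hypothetical patterns for the two "computational hedging" formulas:
# "not just/only X, but (also) Y" and "rather than A, ... B".
HEDGE_PATTERNS = [
    re.compile(r"\bnot (?:just|only|merely)\b.*?\bbut(?: also)?\b", re.IGNORECASE),
    re.compile(r"\brather than\b.*?,", re.IGNORECASE),
]

def hedges(text):
    """Return any hedging formulations found in a piece of text."""
    return [m.group(0) for p in HEDGE_PATTERNS for m in p.finditer(text)]

samples = [
    "It's not only efficiency that matters, but also stakeholder engagement.",
    "Rather than focusing on obstacles, we should embrace transformative opportunities.",
    "That guy Higgins ain't got nothin' to be proud of.",
]

for s in samples:
    print(s, "->", "flagged" if hedges(s) else "clean")
```

The first two samples are flagged; Crawford’s seasick soldier, committing himself entirely, passes as clean.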

Here are such ready-made phrases (“stakeholder engagement, transformative opportunities”) as Orwell warned against. He urged us to be “constantly on guard” against them. William Deresiewicz wanted his students to revise every sentence they wrote, while E. B. White took a more relaxed line:

Write in a way that comes easily and naturally to you …

Still, the natural was not the same as the good. To be able to write naturally, or at least to get away with it, you needed to know by experience what was good.

The same goes for AI:

“AI” needs an example of what “good” looks like before it can try to produce “good” writing.

So says Jack Apollo George, one of twenty thousand people said to be working full time, just to provide new writing that LLMs can be “trained” on. His essay is “The write stuff: How human scribes are fuelling AI” (The Guardian Weekly, 13 September 2024; the online version was titled “‘If journalism is going up in smoke, I might as well get high off the fumes’: confessions of a chatbot helper,” 7 September 2024).

Thus AI firms are hiring humans to write stuff to train AI to write stuff so that humans need not be hired to write it.

We humans have always trained one another, but AI cannot train itself:

indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear. We refer to this effect as ‘model collapse’ and show that it can occur in LLMs as well as in variational autoencoders (VAEs) and Gaussian mixture models (GMMs) … the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of LLM-generated content in data crawled from the Internet.

Source: Shumailov, I., Shumaylov, Z., Zhao, Y. et al. “AI models collapse when trained on recursively generated data.” Nature 631, 755–759 (2024). DOI: 10.1038/s41586-024-07566-y

Jack Apollo George provides that source, with amplification:

Shumailov explained to me that each time a model is trained on synthetic data, it loses awareness of the long tail of “minority data” that it was originally trained on (rare words, unusual facts etc). The breadth of knowledge is eroded and replaced by only the most likely datapoints – LLMs are at their core sophisticated text-prediction machines.
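The mechanism that Shumailov describes can be seen in a toy simulation, which is not the paper’s own experiment: a one-dimensional Gaussian stands in for the LLM, and each “generation” is fitted only to samples drawn from the previous generation’s fit:

```python
import random
import statistics

random.seed(0)

def fit_and_resample(data):
    """Fit a Gaussian by maximum likelihood, then sample a fresh
    dataset of the same size from the fitted model."""
    n = len(data)
    mu = sum(data) / n
    var = sum((x - mu) ** 2 for x in data) / n  # MLE variance
    return [random.gauss(mu, var ** 0.5) for _ in range(n)]

# Generation 0: "human" data from a standard normal distribution.
data = [random.gauss(0, 1) for _ in range(30)]
spreads = [statistics.pstdev(data)]

# Each later generation is trained only on the previous one's output.
for _ in range(300):
    data = fit_and_resample(data)
    spreads.append(statistics.pstdev(data))

print(f"spread of generation 0:   {spreads[0]:.3f}")
print(f"spread of generation 300: {spreads[-1]:.3f}")
```

With each generation, the fitted spread drifts downward, so the rare values in the tails, the “minority data,” disappear first; after enough generations the samples cluster tightly around a single point.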

Orwell’s figures of speech, based on “a packet of aspirins always at one’s elbow” and “cavalry horses answering the bugle,” will be among the lost minority data, at least if Orwell’s own rules are followed:

  1. Never use a metaphor, simile or other figure of speech which you are used to seeing in print.
  2. Never use a long word where a short one will do.
  3. If it is possible to cut a word out, always cut it out.
  4. Never use the passive where you can use the active.
  5. Never use a foreign phrase, a scientific word or a jargon word if you can think of an everyday English equivalent.
  6. Break any of these rules sooner than say anything barbarous.

The computer cannot really follow the last rule. Orwell has prefaced his advice by saying,

one needs rules that one can rely on when instinct fails. I think the following rules will cover most cases …

Instinct fails, not by telling you nothing, for then you could still have a special emergency rule. Instinct fails by telling you the wrong thing. Orwell gives two examples in his essay. First, he does not like how

the English flower names which were in use till very recently are being ousted by Greek ones, snap-dragon becoming antirrhinum, forget-me-not becoming myosotis, etc. It is hard to see any practical reason for this change of fashion: it is probably due to an instinctive turning-away from the more homely word and a vague feeling that the Greek word is scientific.

Also, “political speech and writing are largely the defense of the indefensible,” and,

When there is a gap between one’s real and one’s declared aims, one turns, as it were instinctively, to long words and exhausted idioms, like a cuttlefish squirting out ink.

The computer then runs entirely on instinct. This is not good enough.

Vi Hart made a point like this, way back before the Covid-19 pandemic, in “Changing my Mind about AI, Universal Basic Income, and the Value of Data” (May 30, 2019):

Content moderation is a complex task. A pure computer program alone cannot reliably identify whether a post is hate speech or not. Language is subtle and fluid; as posts get flagged people change the way they talk both naturally and purposefully to get around the automated system. People are continually inventing new symbols, metaphors, images, and code words that can only be flagged by an AI if that AI gets new data from human beings who can provide labeled examples of how things have changed. And because this has to be done in real time in order for social media giants to stay family friendly, the very architecture includes asking human beings what to do. An engineer is paid a lot to create a system that allows an unpaid user to flag content, automatically sending it to someone else who makes below minimum wage to make a judgement call.

Humans are always coming up with new things, the way an algorithm cannot. I accept this consequence of Gödel’s Incompleteness Theorem, as identified by Roger Penrose, but that is for another post.

Edited September 27, 2025

One Comment

  1. Anonymous
    Posted September 6, 2025 at 8:59 pm

    Hey David. I am (once again) teaching a first-year writing seminar here in Claremont (the scene of the crime for Deresiewicz). LLM/AI challenges in the classroom are getting more intense by the day, and many professors are losing their minds. I really enjoyed this piece and learned from it. Thanks! Andre

