Machinations

Sources for this post include the following.

  • On recent events in the US:

    1. Seth Masket, “Friday Night Musk-acre” (February 1, 2025).
    2. Olga Lautman, “Why has Musk gained access to our data?” (February 2, 2025).
    3. Timothy Snyder, “The Logic of Destruction: And how to resist it” (February 2, 2025).
    4. Heather Cox Richardson, “February 2, 2025.”
    5. Malcolm Nance, “In The Trump ‘White’ House: No Spies Matter” (February 7, 2025).
    6. “An Uproar as Trump and Musk Wreak Havoc” (New York Times, letters, February 7, 2025).
    7. Elad Nehorai, “Elon Musk Isn’t a White Nationalist. He’s a White Globalist” (February 7, 2025).
  • On technological fantasies and what they may do to students:

    1. Michael Townsen Hicks, James Humphries, and Joe Slater, “ChatGPT is bullshit” (2024).
    2. John Warner, “AI Boosters Think You’re Dumb” (February 2, 2025).
    3. Seth Bruggeman, “A Crisis of Trust in the Classroom” (January 14, 2025) – students either cheat with technology, or do little of anything.
    4. Robert Pirsig, Zen and the Art of Motorcycle Maintenance (1974) – perhaps students motivated only by grades should drop out.
    5. Steve Rose (interviewer), “Five ways AI could improve the world: ‘We can cure all diseases, stabilise our climate, halt poverty’” (Thu 6 Jul 2023) – Ray Kurzweil thinks “Our mobile phone … makes us more intelligent,” and since we already have nukes, AI is “not really making life more dangerous”; anyway, “More intelligence will lead to better everything.”
    6. Rachel Uda, “In Such a Connected World, Why Are We Lonelier Than Ever?” (February 6, 2023).
    7. Hanna Rosin interviewing Jonathan Haidt, “The Smartphone Kids Are Not All Right” (March 21, 2024).
  • On a particular fantasy of effortless learning:

    1. Wikipedia, “Decoded neurofeedback.”
    2. Adam Hadhazy, “Science Fiction or Fact: Instant, ‘Matrix’-like Learning” (June 21, 2012).
    3. Takeo Watanabe and others, “Perceptual Learning Incepted by Decoded fMRI Neurofeedback Without Stimulus Presentation” (9 December 2011).
    4. Kevin Le Gendre, “Steel pan virtuoso Leon Foster Thomas: ‘Some people don’t think it’s a serious instrument’ ” (February 24, 2023).
  • Works leading me, somehow, to all of that:

    1. Northrop Frye, The Double Vision (1991).
    2. Peter Jukes, “In a rare interview, Philip Pullman tells us his own origin story, and why the great questions are still religious ones” (13 January 2014).

Towering over tourists are stone figures that have “the body of a bull, wings of an eagle, and the crowned head of a bearded man”
At Persepolis, outside Shiraz, Iran, Tuesday, September 4, 2012, this is the Gate of Xerxes – the Xerxes whose failed invasion of Greece is recounted by Herodotus


“Machine learning” is a bad metaphor – I suggested that last time. Now I would link it to the machinations coming into view:

Elon Musk appears to be trying to do to the federal government what he did at Twitter/X: massively disrupt its functioning and drive out experienced employees not on board with his transformations and his personality cult.

Thus Seth Masket, “Friday Night Musk-acre” (February 1, 2025). Claire Berlinski at the Cosmopolitan Globalist sent that analysis out to her followers, and then sent “Why has Musk gained access to our data?” (February 2), by Olga Lautman:

I don’t want to be an alarmist and was not planning on writing anything tonight, but this is insanely dangerous. Elon Musk, an unelected billionaire with no oversight, has gained access to the federal government’s payment system. Even worse, this is happening through the unofficial, unauthorized, fake DOGE, and a group of Musk’s handpicked associates. How did these people even get security clearances? DOGE is not a real government agency, yet it has been given the power to monitor and potentially block federal payments, including Social Security checks, Medicare reimbursements, federal employee salaries, and tax refunds.

In her letter of February 2, 2025, Heather Cox Richardson cites Masket’s letter, along with “The Logic of Destruction: And how to resist it” (February 2), where Timothy Snyder writes,

The parts of the government that work to implement laws have been maligned for decades. Americans have been told that the people who provide them with services are conspirators within a “deep state.” We have been instructed that the billionaires are the heroes.

All of this work was preparatory to the coup that is going on now.

“All of this work” – all of these machinations.


Update, February 8, 2025. Why are they doing it? I’m afraid the explanation is indeed the kind of racism that motivated Hitler. This is an awful accusation, but I think it must be made. If it is recognized, then, even though there is little first-hand memory of Nazi Germany left, perhaps second-hand memories can motivate enough people to resist – if indeed they don’t think Hitler had good ideas.

Writing “On Dialectic,” I noted how

  • Kendall Hailey (in The Day I Became an Autodidact, 1988) had the impression that Plato was “no more than an ancient Hitler,” while
  • Ellen Ullman (in Life in Code: A Personal History of Technology, 2017) found latter-day Hitlers among men who thought they were “real techies.”

I can now supplement the sources above with the following, all dated February 7, 2025. First is Malcolm Nance, “In The Trump ‘White’ House: No Spies Matter”:

In a stunning display of what I called WEI, White equity, and inclusion, Trump used his incompetent, unqualified, drunken frat bro Pete Hegseth to issue orders that would essentially turn the United States Department of Defense into a white supremacist organization.

In one of the letters in the New York Times under the title, “An Uproar as Trump and Musk Wreak Havoc,” Charles Llewellyn of Beaufort, N.C., writes,

I was a commissioned Foreign Service officer with U.S.A.I.D. from 1986 to 2010, working on health programs in six countries … I was not a member of “a criminal organization,” as claimed by Elon Musk.

Mr. Musk has a personal vendetta against U.S.A.I.D. He rightly claims that U.S.A.I.D. helped overthrow the apartheid government of his native South Africa. Now, he is extracting his revenge, enabled by President Trump. Unfortunately, they are destroying an incredibly important institution of America’s foreign policy.

Finally, Elad Nehorai, “Elon Musk Isn’t a White Nationalist. He’s a White Globalist”:

If you know even a little about Elon Musk, you’ll know he is obsessed with birthrates … Among the many head-spinning changes we’ve seen him take with his work, the most shocking and horrific is his attempt to completely shut down USAID … Children, in particular, are some of the most protected by USAID … So why would Elon Musk want to kill hundreds of thousands if not millions of children?

If you know anything about Musk, the answer is quite simple: he is only concerned with white children and white birth rates … Rather than only trying to increase birth rates of white people and jail brown/black people/immigrants, he is also trying to reduce the populations of brown and black countries … If we were to evolve our definitions and discussions, Musk would not be termed a white nationalist. He would be a white globalist.


Again, as Timothy Snyder reports, “Americans have been told that the people who provide them with services are conspirators within a ‘deep state.’ ” Everybody has been told as well that there is such a thing as “machine learning.” I’m thinking now that the metaphor is not just poorly chosen, but not even chosen in good faith. It is not exactly a lie, since it does make some sense as a metaphor. Still, in a word, it is bullshit.

“ChatGPT is bullshit” – that’s a plausible assertion and the title of a paper in Ethics and Information Technology (2024), where Michael Townsen Hicks, James Humphries, and Joe Slater say,

we argue against the view that when ChatGPT and the like produce false claims they are lying or even hallucinating, and in favour of the position that the activity they are engaged in is bullshitting, in the Frankfurtian sense … Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit.

I don’t know why the writers have to refer to “text that looks truth-apt,” introducing a new technical term, when they could just talk about, say, ostensibly declarative sentences. In any case, it seems appropriate to classify promoters of AI, together with the current US President, as bullshitters.

The idea is corroborated by John Warner in his latest newsletter, “AI Boosters Think You’re Dumb” (February 2, 2025). Apparently Reid Hoffman (a co-founder of LinkedIn) promotes a sorites (an extended syllogism) whose premisses are:

  • Imagination is good.
  • Hallucinating is imagining.
  • AI hallucinates.

You see the conclusion: AI is good. Warner points out,

Hoffman, who knows that large language models run on probabilistic processes, knows what he is saying is fundamentally wrong, but he is counting on his audience not knowing this in order to sell his vision for the future, a vision which suggests we should live lives entirely mediated by artificial intelligence.
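Warner’s point that large language models “run on probabilistic processes” can be made concrete with a toy sketch. This is only an illustration, not how any real model works: the vocabulary and probabilities below are invented. What it shows is the essential shape of the procedure – each word is drawn at random from a conditional distribution – and that nothing anywhere in it consults the truth of what comes out.

```python
import random

# A toy "language model": probabilities of the next word given the
# previous one. In a real model these would be estimated from a corpus;
# here they are invented for illustration. Note that nothing in this
# table encodes whether a resulting sentence is true.
NEXT_WORD = {
    "the":    {"moon": 0.5, "cheese": 0.5},
    "moon":   {"is": 1.0},
    "cheese": {"is": 1.0},
    "is":     {"made": 0.6, "bright": 0.4},
    "made":   {"of": 1.0},
    "of":     {"cheese": 0.7, "rock": 0.3},
}

def generate(start, length, seed=0):
    """Sample a word sequence by repeatedly drawing the next word from
    the conditional distribution. The output looks like a declarative
    sentence, but no step of the procedure checks it against the world."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        dist = NEXT_WORD.get(words[-1])
        if dist is None:
            break  # no continuation known for this word
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 5))  # e.g. "the cheese is made of cheese"
```

The sketch produces fluent-looking claims about the moon and cheese with equal indifference; which one you get depends only on the dice. That indifference, scaled up enormously, is what Hicks, Humphries, and Slater mean by calling the outputs bullshit in the Frankfurtian sense.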

It seems to me such a vision can only encourage what another teacher observed in his students:

They’d drift in and out of the classroom. Many just stopped showing up. Those who did were often distracted and unfocused. I had to ask students to stop watching movies and to not play video games. Students demanded time to talk about how they were graded unfairly on one assignment or another but then would not show up for meetings. My beleaguered TAs sifted through endless AI-generated nonsense submitted for assignments that, in some cases, asked only for a sentence or two of wholly unsubstantiated opinion. One student photoshopped himself into a picture of a local museum rather than visiting it, as required by an assignment.

If what the students are supposed to be doing can be done by machine, then why should they themselves even bother?

The last quotation is from “A Crisis of Trust in the Classroom” (Inside Higher Ed, January 14, 2025), by Seth Bruggeman, professor of history at Temple University. Inevitably, I liken his words to some of Robert Pirsig’s in Zen and the Art of Motorcycle Maintenance (1974):

The student’s biggest problem was a slave mentality which had been built into him by years of carrot-and-whip grading, a mule mentality which said, “If you don’t whip me, I won’t work.”

The student here is an imaginary one who drops out of class when there is no penalty for not completing assignments. For Bruggeman,

college instructors are contending with dire problems related to how a rising generation of students understands learning … problems concerning citizenship, identity and the commodification of everything. They reflect a collapse of trust in institutions, knowledge and the self.

That’s pretty vague, but Bruggeman gets more precise: “Students do not know how to read.”

My wife and I have been seeing suggestions of that for some years, though in another country (Turkey) and another subject (mathematics). When our students do want to learn more about something, they may seek out videos to watch rather than texts to read, if not special lectures by us.

Would they like to learn by “decoded neurofeedback”? According to some old experiments, when you shut a cat, dog, or chicken in a cage whose latch it can undo, if only it can work out how, then it does work out how, just by flailing about. What if it could be learning something else, unwittingly, at the same time? That is what “decoded neurofeedback” is supposed to accomplish, as far as I can tell. If the animal is replaced with a human, and the goal is not getting out of a cage, but making a dot bigger using thought alone, then that thought can be tricked into identifying one of three orientations of a “Gabor patch.” That’s what I infer from piecing together what is said in the sources on decoded neurofeedback listed above.

Well, the Wright brothers did not fly very far in their first airplane.

The animal experiments were by Edward Lee Thorndike, published in 1898; I mentioned them when taking up the first two waves in Republic Book V.

Some time before writing “On Being Given to Know,” I had read Hadhazy’s article. What had stayed with me from it was a quotation from Takeo Watanabe, then of Boston University, now of Brown. Alluding to The Matrix (which I watched once), Hadhazy observes,

Nowadays mastering a style of kung fu takes thousands of hours of practice. But there are some emerging hints that the pace of learning a skill can be technologically boosted.

At the end, he says,

Despite all these obstacles, Watanabe is optimistic. He thinks scientists, using techniques similar to decoded neurofeedback, soon will be able to essentially delete a person’s unwanted, traumatic memories and enhance learning.

Kung fu, however, is not at the top of Watanabe’s own priority list. For one thing, he says, it trails guitar playing. “I wish I could play like Jimmy Page of Led Zeppelin.”

There’s one way to get to Carnegie Hall, or in this case, Madison Square Garden: practice. I think that’s basically what I tried to say in the earlier post. One could think otherwise, if led to believe machine learning were possible.

Look at what is said of and by a musician who emulates others while finding his own voice:

Thomas foregrounds his own material rather than covers, and Calasanitus reveals his versatility above all else. The album has a fair amount of rhythmic fire, but Thomas also plays with a flickering lyricism that reflects deep immersion in the music of Miles Davis and Keith Jarrett, two of the artists who have brought a notable poetic gravitas to the jazz canon.

“I tried to emulate him in my playing,” Thomas says of Jarrett, the pianist whose renowned Standards Trio has been a major source of inspiration. “It gave me a great understanding of what I needed to do, how I needed to sound,” says Thomas. “I wasn’t looking at things from a steel pan point of view, just a musical point of view, as I want to make sure that when I speak through the pan it’s what I want to say and how I want to say it …”

Thus Kevin Le Gendre in “Steel pan virtuoso Leon Foster Thomas: ‘Some people don’t think it’s a serious instrument’ ” (The Guardian, February 24, 2023). Would Thomas imagine decoded neurofeedback could help him?

Adam Hadhazy says,

Learning is a tedious process – ask any calculus student, or athlete training for the Olympics. Repetition of a task, whether solving math problems or pole-vaulting, gradually instills long-term mental and muscle memory.

A study published late last year suggests how this learning process might be amplified, and without the learner even being aware of it.

First of all, had you asked me as a calculus student, I would not have said the subject was tedious. I don’t know about training for the Olympics, but had I found it tedious to ride a unicycle, I never would have learned. After working at an organic vegetable farm, I pulled weeds from lawns instinctively, so I guess I had learned something from going down rows of plants with a hoe. Perhaps the work was tedious; on the other hand, I had to try to do my job properly. It wasn’t mechanical.

I don’t know what one learns on an assembly line, but I guess Charlie Chaplin made fun of it in Modern Times. So something seems off to me, with the metaphors people use for learning.

In the last post, “Removal,” I quoted Northrop Frye (1912–91) from The Double Vision (1991) as saying that in literature,

the organizing principles are myth, that is, story or narrative, and metaphor, that is, figured language.

I investigated the related myths whereby

  • consciousness has evolved through the mechanisms recognized by Darwin;
  • machine learning and artificial intelligence are possible.

I was going to add the following to the post, but then decided to make a new post – this one. Philip Pullman seems to accept the myth about evolution, while recognizing that there must be more to the story. This is from an interview published in Aeon, January 13, 2014:

‘I like to say I’m a complete materialist but …’ Pullman allows himself an English teacher’s dramatic pause, ‘matter is conscious. How do I know that? Because I’m matter and I’m conscious.’ Once again, Pullman opts for complexity and nuance, and you can hear the same dislike of hierarchies in his critique of some popular science. ‘What you often get in people of this stripe (and Brian Cox – the TV physicist – goes in for it as well), is a sentence of the formula “X is no more than/just/merely/nothing but Y.” For example: “The world is nothing but the action of molecules” or “Love is merely the movement of electrons in the brains.” Sentences of that sort are nearly always mistaken,’ says Pullman. ‘I would prefer they were put in the form of “Love is a movement of electrons in the brain, among other things.”’

Another one of the sentences that “are nearly always mistaken” is “Love is just a chemical reaction”; Edward Frenkel gave it (and expressed sorrow for whoever said it) when speaking at Boğaziçi University on December 3, 2015, to promote Love and Math. Frenkel was sorry also for Ray Kurzweil, as I said in (or in the back matter of) the post already mentioned, “On Being Given to Know.”

I noticed recently that Kurzweil had contributed to the “Good News” section of a collection of short essays about AI in the Guardian Weekly (14 July 2023); edited by Steve Rose, that section is online as “Five ways AI could improve the world” (the “Bad News” part is “Five ways AI might destroy the world”). According to Kurzweil:

We merge with our machines. Ultimately, they will extend who we are. Our mobile phone, for example, makes us more intelligent and able to communicate with each other.

That sounds like somebody with a blind spot, to say the least; otherwise, again, it’s bullshit. Perhaps Kurzweil thinks Bart Simpson really did solve the problem of hearing one hand clapping. In “Impermanence,” on Nicomachean Ethics IX.i–iii, I suggested Aristotle would think the same.

I wonder how Kurzweil responds to writing on the theme of, “In Such a Connected World, Why Are We Lonelier Than Ever?” (February 6, 2023). A quick search yielded that essay, by Rachel Uda, on a site called Katie Couric Media, which I’m afraid looks as if some of its “content” could be produced by AI. I thought the same of a site called Independent Femme, which I talked about in “Femininity,” on Iliad Book XIV, where Hera uses her feminine wiles against Zeus.

Uda refers to Jonathan Haidt, whose interview by Hanna Rosin, “The Smartphone Kids Are Not All Right” (The Atlantic, March 21, 2024) I had already saved and read:

Hanna Rosin
I have a child who would say they were addicted but also would say that online is where they found their friends and where they found people who shared their interests, and that’s something they couldn’t do in real life …
Haidt
That’s right. It greatly increases the quantity of social interactions. That’s true. And it greatly decreases their quality. But here’s the thing: If you were right – that it’s opening up all these possible social relationships, that it’s doing all this good – if you were right, then loneliness should have gone down in the 2010s, and it didn’t. It goes up like a hockey stick.

I am suspicious of measuring a person’s loneliness as if it were height or weight. The same goes for intelligence. Even then, it doesn’t seem likely that people do better on IQ tests, once they start using mobiles habitually. If Kurzweil would say, “I didn’t mean that,” then so much the better, perhaps. In any case, he continues:

If the wrong people take control of AI, that could be bad for the rest of us, so we really need to keep pace with that, which we are doing. But we already have things that have nothing to do with AI, such as atomic weapons, that could destroy everyone. So it’s not really making life more dangerous …

We’re already taking a big risk with nuclear weapons, so we might as well take risks with AI as well? Kurzweil is as naïve as Glaucon and Diomedes, who implicitly believe themselves qualified to breed humans, so as to produce the best rulers for the Callipolis proposed by Socrates in the Republic.

The rate of change will be difficult for some people …

We’ve made great progress but there are still people who are desperate. More intelligence will lead to better everything. We will have the possibility of everybody having a very good life.

Some of us think it’s a hard problem, what the good life is. Luckily for us, Kurzweil has solved it.

Two humans appearing as tall as the columns far behind them under a clear blue sky
The Gate of Xerxes behind author and spouse, September 4, 2012

Edited February 5 and 8 and March 12, 2025

