One upping the last one

The previous post was about journal referees using chatbots to write referee reports. Via Daily Nous, I learned today that Wiley seems to have used an LLM to generate an abstract for an article in a volume that it published. Sandra Leonie Field, author of the article, notes that the bullshit abstract both garbles her argument and misgenders her.

Labour-squandering technology

Australian regulators sponsored a test using generative AI to summarize documents. The soft-spoken conclusion was that “AI outputs could potentially create more work… due to the need to fact check outputs, or because the original source material actually presented information better.”

Coverage of the study leads with the headline: AI worse than humans in every way at summarising information

Parrot progress

Emily Bender famously coined the phrase stochastic parrot to describe text-only chatbots. The trend towards parrots continues: Ars Technica has a clip of a ChatGPT-4o test run where the bot, which has a canned voice it is supposed to use, replies in the user’s own voice.

OpenAI promises that the actual release version totally doesn’t do this.1

Continue reading “Parrot progress”

AI problem solving, a snapshot

I ask Copilot: “A man and a goat are on one side of the river. They have a boat. How can they go across?”

It replies: “The man takes the goat across the river first, leaving the goat on the other side. Then he returns alone to get the boat and brings it back to the original side. Finally, he takes the goat across the river again. 🚣‍♂️🐐”

Finishing with relevant emoji is very much on-brand for Copilot. In its ability to find relevant emoji, it is a match for any human.

On writing and thinking

My forthcoming paper On trusting chatbots is centrally about the challenge of believing claims that appear in LLM output. I am sceptical about the prospects of AI-generated summaries of facts, but I also throw a bit of shade on the suggestion that AI should be used for brainstorming and conjuring up early drafts. Sifting through bullshit is not like editing in the usual sense, I suggest.

Nevertheless, I know people who advocate using chatbots for early drafts of formulaic things like work e-mails and formal proposals. That’s fine, I suppose, but only for the sorts of things where one might just as well find some boilerplate example on-line and use that as a starting place. For anything more original, there’s a real danger in letting a chatbot guide early writing.

Continue reading “On writing and thinking”

ScabGPT

Via Daily Beast and Daily Nous: The administration at Boston University has made a number of tone-deaf suggestions for how faculty can juggle students while their graduate student TAs are on strike. Among these: “Engage generative AI tools to give feedback or facilitate ‘discussion’ on readings or assignments.”

Last year, I wrote that “there will be people who lose their jobs because of generative algorithms. This won’t be because they can be replaced, but instead because of rapacious capitalism. To put it in plainer terms, because their management is a bunch of dicks.”

Dots. Connected.

Imagination, philosophy, and imitation games

Via Daily Nous, I came across a blog post by Justin Smith-Ruiu about creative writing as philosophy. The post is, ultimately, an argument that philosophy can be “incitement of the imagination, by creative means, to see the world in unfamiliar ways.” I agree with that! But there are digressions along the way that range from false speculation to attacks on the kind of philosophy that I (sometimes) do.

Continue reading “Imagination, philosophy, and imitation games”