On writing and thinking

My forthcoming paper, "On trusting chatbots", is centrally about the challenge of believing claims that appear in LLM output. I am sceptical about the prospects of AI-generated summaries of facts, but I also throw a bit of shade on the suggestion that AI should be used for brainstorming and conjuring up early drafts. Sifting through bullshit is not like editing in the usual sense, I suggest.

Nevertheless, I know people who advocate using chatbots for early drafts of formulaic things like work e-mails and formal proposals. That’s fine, I suppose, but only for the sorts of things where one might just as well find some boilerplate example on-line and use that as a starting place. For anything more original, there’s a real danger in letting a chatbot guide early writing.

Decades ago, when applying for a teaching job, I wrote this:

It is common for students, after having turned in an assignment which woefully misses the mark, to say that they understand everything up in their head and that it’s only that their understanding didn’t show up in their writing. Sometimes this may be so; perhaps they were distracted while trying to write the paper, and under different circumstances they could have done better. More often, their excuses are disingenuous. The fact that they are unable to express themselves on the topic is a sign that they really don’t understand. Their subjective certainty that they understand is different from real understanding.

I offer this observation because it shows that trying to write on a topic is an important way to discern whether or not we really understand it. Often enough, we begin with a murky understanding and only achieve clarity in the course of setting our ideas down in words. For that reason, learning to write is tantamount to learning to think.

It’s a bit wordy, but I still think it’s true. And the point is even more pressing when the question is one where several reasonable answers are possible. Writing through a complicated topic is tantamount to thinking it through. The answer one ends up with, what one ends up believing, will be shaped by the argument one constructs. Outsourcing the writing is outsourcing the thinking.

As a counterpoint, one might cite Terence Tao’s suggestion that a mathematician should be able to hand an AI a proof sketch and have it generate the LaTeX file for the resulting scholarly paper. Two important rejoinders: First, a key part of his suggestion is that the AI should also generate a formalization of the proof which could be verified by an algorithm for checking proofs. That further algorithm would not be an LLM. Second, the offer is less tempting in disciplines like philosophy. Outside of certain sub-disciplines, producing a valid formal proof is not the primary thing one does in a philosophy paper. Even when it is part of what happens, there is crucial work in interpreting the terms of the formal proof so as to draw substantive conclusions. And many times the thesis of the paper itself changes in the course of fully articulating the argument for it.
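
To make the first rejoinder concrete: the kind of formalization Tao has in mind is something a proof checker can verify mechanically. A minimal sketch in Lean (my own illustration, not drawn from Tao's proposal) looks like this, with the Lean kernel, a deterministic checker rather than an LLM, certifying that the proof actually establishes the claim.

    -- A trivial machine-checkable theorem: commutativity of addition on Nat.
    -- The Lean kernel, not an LLM, verifies that this proof is valid.
    theorem add_comm_example (m n : Nat) : m + n = n + m :=
      Nat.add_comm m n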
