Doctor gpt

At Daily Nous, there’s discussion of Rebecca Lowe’s post about how great it is to talk philosophy with the latest version of ChatGPT.

There’s pushback in the comments. Others reply that the critics haven’t used the latest version (which is only available behind a paywall). Discussion of LLMs will always allow this: Complaints about their shortcomings are answered by pointing to the next version that’s supposed to resolve all the issues.

Lowe and other commenters reveal that lots of philosophers are using LLMs in the regular day-to-day of their research. I’m still trying to figure out what I think about that. For now, let’s just deflate Lowe’s ridiculous offhand claim that “GPT could easily get a PhD on any philosophical topic.” I say ridiculous for a few reasons:

First, LLM-based chatbots perform best in short exchanges, where their outputs can be nudged along by helpful prompts. Writing a PhD thesis is long-form writing.1

Second, Lowe encourages us to “treat [the chatbot] as an equal, get it to role-play.” This approach naturally lulls the user into the pretense that the chatbot is a competent interlocutor. Recall the old-school program Eliza, with which users could become quite intimate when they treated it as an equal.

Third, let’s suppose the LLM did write a passable PhD thesis.2 Why should that be a surprise? The model is trained on a great many things, among them PhD theses. Without guardrails to stop it from echoing training data verbatim, it could probably output my PhD thesis.

  1. This holds even for a three-papers thesis.
  2. The output would probably come in fits and starts, with sections that a human editor would have to concatenate, but that circles back to the first and second worries.
