Exchanging Marx for Lincoln

I just posted a draft of Generative AI and Photographic Transparency, a short paper that is about those things. It builds on two blog posts that I wrote a while ago, but fleshes out the discussion in several respects. Whereas the blog posts used pictures of Karl Marx as their specimen example, the paper instead considers pictures of Abraham Lincoln. The change lets me work in some quotes from William James and Oliver Wendell Holmes.

It is still a draft, so comments are welcome.

Generative AI and homogenization

Among the legitimate worries about Large Language Models is that they will homogenize diverse voices. As more content is generated by LLMs, the generic style of LLM output will provide exemplars to people finding their own voices. So even people who write for themselves will learn to write like machines.

Continue reading “Generative AI and homogenization”

Generative AI and rapacious capitalism

Some people have claimed that Large Language Models like ChatGPT will do for wordsmiths like me what automation has been doing to tradesfolk for centuries. They’re wrong. Nevertheless, there will be people who lose their jobs because of generative algorithms. This won’t be because they can be replaced, but instead because of rapacious capitalism. To put it in plainer terms, because their management is a bunch of dicks.

Continue reading “Generative AI and rapacious capitalism”

My last post playing with ChatGPT

I was a bit chuffed that ChatGPT knows about the JRD thesis, and then I whinged about the fact that it confabulates like mad. Turns out the former is just an instance of the latter.

When asked cold about the James-Rudner-Douglas thesis, it denies knowing that there is such a thing. However, I was able to reconstruct the path that got me to the answers that I discussed in my earlier post: Ask about the Argument from Inductive Risk first in the same conversation, and it reports confidently about the thesis.

Continue reading “My last post playing with ChatGPT”

This robot confabulates like a human

As a philosopher, I am often asked about the nature of truth. What is truth? How do we know what is true? These are questions that have puzzled philosophers for centuries, and they continue to be the subject of intense debate and discussion.

Eric Schwitzgebel has gotten GPT-3 to write blog posts in his style, so I asked OpenAI’s ChatGPT to write a blog post in my style, prompted explicitly as “in the style of P.D. Magnus.” It led with the opening that I’ve quoted above, followed by descriptions of correspondence and coherence theories of truth.

When asked who P.D. Magnus is, however, it replies, “I’m sorry, but I am not familiar with a person named P.D. Magnus.” At least, most of the time. One prompt generated, “P.D. Magnus is a fictional character and does not exist in reality.”

Continue reading “This robot confabulates like a human”

This robot has read my work

Like lots of the internet, I’ve been playing around a bit with OpenAI’s ChatGPT. Lots of it is pretty mundane, because it avoids going too far off the rails. Often it refuses to answer, explaining that it is just a large language model designed to answer general knowledge questions.

It can put together strings of text that have the air of fluency about them, but it is only tenuously tethered to the world.

I first asked about the Argument from Inductive Risk. ChatGPT (wrongly) answered that inductive risk is a kind of scepticism which implies that induction is “inherently unreliable” and “that it is safer to rely on other forms of reasoning, such as deductive reasoning.”

I asked about the James-Rudner-Douglas thesis, which I expected it not to know about at all. The JRD thesis is a particular construal of inductive risk, and the phrase is really only used by me. Surprisingly, however, ChatGPT thinks it knows about the James-Rudner-Douglas thesis.

Continue reading “This robot has read my work”

Robot overlords win blue ribbon (not really)

I’m teaching Philosophy of Art this semester, and a student pointed me to an Ars Technica story with the headline “AI wins state fair art contest, annoys humans.” Jason Allen used Midjourney (the same AI that I was playing with recently) to make some images and enter them in the Colorado State Fair art contest. One of those images won first place in the Digital Arts/Digitally Manipulated Photography category.

There’s lots of discussion about whether this is the end for human artists (it’s not), whether this shows that AI are now making real art (no), and whether the submission of AI-generated images to the State Fair was dishonest (maybe).

Continue reading “Robot overlords win blue ribbon (not really)”

LeWitt and le wisdom

Several years ago, my colleague Jason D’Cruz and I settled on the idea of writing something about Goodman’s autographic/allographic distinction. In the course of our discussions, he introduced me to Sol LeWitt’s wall drawings. I went down a rabbit hole of reading about them. I saw the exhibition at MassMOCA. I devised a wall drawing of my own.

But our work went in other directions, and we didn’t publish anything about LeWitt or about wall drawings. After a reading group this summer, he commented that this was a shame. So I sent off a short item which has now appeared in Contemporary Aesthetics: “That Some of Sol LeWitt’s Later Wall Drawings Aren’t Wall Drawings.”

The referee commented that this note could have appeared in a longer paper about conceptualism and the nature of art. It could have, perhaps, except that waiting on that longer paper to write itself would probably mean never publishing this bit.

LeWitt (1968) x Conway (1970); realized 2015; Loughlin Street; Albany, New York