Labour-squandering technology

Australian regulators sponsored a test using generative AI to summarize documents. The soft-spoken conclusion was that “AI outputs could potentially create more work… due to the need to fact check outputs, or because the original source material actually presented information better.”

Coverage of the study leads with the headline: “AI worse than humans in every way at summarising information”.

The tangled web

In which I find myself unironically missing old, hard-copy Yellow Pages.

I came into possession of a vintage sport coat which was in excellent condition except for several strata of dust on the shoulders, from hanging unused but uncovered for decades. The care instructions say dry clean only, so I went looking for a dry cleaner. The internet suggested there were several near me. On further examination, however, one was shuttered. Another had remodeled and become just a regular laundromat.

Continue reading “The tangled web”

Engines of enshittification

Via Ars Technica, I’ve learned that shady Amazon sellers have been using chatbots to automatically write item descriptions. The result is hot offers on items like “I cannot fulfill that request” and “I apologize but I cannot complete this task.” This is a natural progression from Amazon product listings which were simply misdescribed by humans.

Continue reading “Engines of enshittification”

It took me years to write it

Fifteen years ago, I conducted a small study testing the error-correction tendency of Wikipedia. Not only is Wikipedia different now than it was then, the community that maintains it is different. Despite the crudity of that study’s methods, it is natural to wonder what the result would be now. So I repeated the earlier study and found surprisingly similar results.

That’s the abstract for a short paper of mine that was published today at First Monday. It is a follow-up to my earlier work on the epistemology of Wikipedia.

Continue reading “It took me years to write it”

Exchanging Marx for Lincoln

I just posted a draft of Generative AI and Photographic Transparency, a short paper that is about those things. It builds on two blog posts that I wrote a while ago, but fleshes out the discussion in several respects. Whereas the blog posts used pictures of Karl Marx as their specimen example, the paper instead considers pictures of Abraham Lincoln. The change lets me work in some quotes from William James and Oliver Wendell Holmes.

It is still a draft, so comments are welcome.

Generative AI and homogenization

Among the legitimate worries about Large Language Models is that they will homogenize diverse voices. As more content is generated by LLMs, the generic style of LLM output will provide exemplars to people finding their own voices. So even people who write for themselves will learn to write like machines.

Continue reading “Generative AI and homogenization”

Generative AI and rapacious capitalism

Some people have claimed that Large Language Models like ChatGPT will do for wordsmiths like me what automation has been doing to tradesfolk for centuries. They’re wrong. Nevertheless, there will be people who lose their jobs because of generative algorithms. This won’t be because they can be replaced, but instead because of rapacious capitalism. To put it in plainer terms, because their management is a bunch of dicks.

Continue reading “Generative AI and rapacious capitalism”

Preserving the presence of the past

Before I started this blog, I posted for more than a decade at Footnotes on Epicycles. The blog software I was using was somebody's indie programming project, and they had stopped maintaining it years before I migrated over here. Sometime in the last month— possibly due to a server update— the code finally stopped working. So I spent some time over the last couple of days hacking together a solution which makes all the old posts available at most of the same URLs.
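
For the curious: the fix amounts to a small shim that maps the dead engine’s dynamic URLs onto static copies of the posts. Here is a minimal sketch of that shape in Python; the ?entry=123 query style and the entries/123.html file layout are hypothetical stand-ins, not the actual details of the old blog.

```python
# Minimal sketch: serve static copies of old posts at the URLs the
# defunct blog engine used. The query parameter name and the file
# layout below are hypothetical placeholders.
import os
from urllib.parse import parse_qs
from wsgiref.simple_server import make_server

ARCHIVE_DIR = "entries"  # static HTML copies of the old posts

def app(environ, start_response):
    qs = parse_qs(environ.get("QUERY_STRING", ""))
    entry = qs.get("entry", [""])[0]
    path = os.path.join(ARCHIVE_DIR, f"{entry}.html")
    # Digits-only check doubles as a guard against path traversal.
    if entry.isdigit() and os.path.isfile(path):
        with open(path, "rb") as f:
            body = f.read()
        start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"No such entry."]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```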

If you want to poke around over there, I’ve also added an archive page.

A fair showing

At Daily Nous, there’s discussion of how prominently philosophy sites figure in Google’s C4 data set— and so in the training set of Large Language Models. The Washington Post has a widget to search for the rank of specific domains.

This very site— this blog plus my other foofaraw— ranks 612,096th with about 38 thousand tokens.

My old blog ranks close behind at 625,716th with about 37 thousand tokens.

Although the tool isn’t designed to give this kind of result, the two together, with a combined 75 thousand or so tokens, would rank somewhere around 300,000th.

So very meta

During a commercial break while streaming the most recent episode of Would I Lie to You, I saw a new ad for Meta (the company formerly known as Facebook). It concludes with the line, “The metaverse may be virtual, but the impact will be real.”

The jarring thing is the ad’s utter failure to imagine anything useful. It offers three examples of what’s coming. All of them involve people putting on goggles and gloves to enter virtual realities, which feels very 1990s.

Continue reading “So very meta”