Fifteen years ago, I conducted a small study testing the error-correction tendency of Wikipedia. Not only is Wikipedia different now than it was then, the community that maintains it is different. Despite the crudity of that study’s methods, it is natural to wonder what the result would be now. So I repeated the earlier study and found surprisingly similar results.
That’s the abstract for a short paper of mine that was published today at First Monday. It is a follow-up to my earlier work on the epistemology of Wikipedia.
I considered taking the paper in a direction that would have made it more relevant to current problems, by drawing a comparison between the moral panic about Wikipedia fifteen years ago and the panic about generative AI today. To a large extent, the panic about Wikipedia faded away rather than being resolved. We came to accept Wikipedia as not being all that bad. So one might think that bullshit from AI is something we will get used to rather than something that will get fixed.
In a talk at UAlbany last spring, Jim Hendler drew this same comparison. He noted, however, that Wikipedia hasn't just become part of our epistemic lives because we changed our standards. The culture and practices of Wikipedia were changed to address at least some of the worries.
Something like ChatGPT doesn't have culture and practices in the same way. So the open question is whether generative algorithms can be refined in a way that ameliorates the problems with them: if not to solve the problems completely, then at least to meet our epistemic demands halfway.