A future author writing a history of human technology will take note of November 30, 2022. In a tweet thread that afternoon, @sama wrote, “today we launched ChatGPT. try talking with it here: http://chat.openai.com.... language interfaces are going to be a big deal, I think. talk to the computer (voice or text) and get what you want, for increasingly complex definitions of ‘want’!”

“Going to be a big deal” was an understatement. Just two months after its launch, in January 2023, ChatGPT became the fastest-growing consumer software application in history, with an estimated 100 million users. Today, ChatGPT and related generative AI offerings dominate headlines and conversations like no other technological innovation since the first iPhone.

For publisher attendees at the Frankfurt Book Fair and their organizations, the challenge AI presents is existential. If you have asked yourself, “Is AI going to take my job?” and crossed your fingers, you are just like everyone else in publishing. We all want to know what AI will mean for our livelihoods.

“Talking” chatbots and generative AI technology present an unprecedented challenge to trust and confidence in all types of published information. Whether it is drafting research papers for academics or essays for college students, ChatGPT generates text with ease, but we are well aware that it is not 100% accurate.

The speed of computing is governed by Moore’s Law. The speed of misinformation is governed by Twain’s Rule: “A lie can travel halfway around the world while the truth is still putting on its shoes.”

OpenAI knows that its chatbot isn’t perfect, and it makes no secret of it. The company has human trainers who review and rate ChatGPT’s responses not only for errors and falsehoods but also for language that is racist, misogynistic, and otherwise biased. This feedback-driven training process is expected to be repeated regularly, improving the model substantially over time.

But do we want to find ourselves in a world where 15%, 10%, or even 5% of news and published research is fake or otherwise unreliable? Do we have a choice? How much fake news and phony science will we decide is acceptable? Has trusted information become obsolete?

A decisive response to the misinformation crisis by scholarly and academic publishing would be a vigorous reaffirmation of “the integrity algorithm.” What do I mean by “algorithm”? I do not mean anti-plagiarism computer code for authenticating research. I use algorithm in its basic sense, to refer to step-by-step actions that guide decision-making. In scholarly and scientific publishing, sustained confidence in content has been built brick by brick through adherence to the scientific method, and cemented by peer review.

Contrast the “integrity algorithm” of journal publishing with the “attention algorithm” of social media. Online platforms attract user attention with content that elicits strong emotional responses, then monetize that attention by surrounding the content with advertising. The result is polarization, alienation, and isolation. Trust of the kind scholarly publishing is built on drives us to very different places: community and collaboration.

The European Union’s Artificial Intelligence Act, passed by the European Parliament, aims to address the risks AI poses to people and their privacy. The White House Office of Science and Technology Policy (OSTP) has issued a Blueprint for an AI Bill of Rights intended to protect Americans from biases and discriminatory practices arising from the technology. Responding to the same public pressure, publishers may soon turn to self-regulation for various uses of AI. They will seek models with histories of self-examination and time-honored adherence to high principles. The integrity algorithm that guides decision-making in scientific and scholarly journal publishing, constructed of a code of conduct rather than computer code, has much to offer as a prototype.

Christopher Kenneally is host of Velocity of Content, Copyright Clearance Center’s podcast series.
