That writers and creatives are among the first to launch an offensive against AI technologies speaks to the level of threat that OpenAI poses to content creators at large. Just eight months into the existence of ChatGPT, we have lawsuits pending from the Authors Guild, Sarah Silverman, and others, as well as creative counterattacks from the world of fan fiction, with creators flooding the internet with gibberish and fake story lines in an effort to befuddle AI.
These swift efforts are not overreactions. We need to be concerned about the impact of AI on our industry, though as a publisher I’m less concerned that AI might scrape my press’s books for content than I am about all the ways in which we’ll have to contend with AI as an unwelcome interloper in every single thing we do. Recently, an author I’ve been working with on and off for a couple of years informed me that she’s going to use ChatGPT to finish her book and asked whether I thought she should disclose this to potential publishers. My immediate reaction was judgment—how could she?—but that was followed by the innate protective instinct I have for my author-clients: no, don’t disclose it!
The conversation with myself did not end there, because of course I’d want any author submitting to my press to disclose that their book was written with the help of AI. But then, if a prospective author did disclose such information, would I acquire the book? On principle, I think not. But what if the author used AI only for research? Is that any different from hiring a research assistant? I can imagine a menu of services AI might perform for a given author, and publishers setting different criteria for what is and isn’t acceptable, depending on their values.
At this moment, the line between AI as tool and AI as plagiarism is blurry to me. Anyone can ask ChatGPT to write an essay in the style of their favorite journalist. Anyone who can type can use AI to write drafts, do research, or create outlines. Is a person plagiarizing if an AI writes a draft of their book that they then “finesse” and ostensibly rewrite in their own words?
When I was a young editor, the publisher I worked for terminated an author for plagiarism. There was something about her prose that prompted me to type a line of it into Google to discover that the words existed there, verbatim, pulled into our pending book and just moved around ever so slightly. The author had done a lot of work, undoubtedly. She’d added some personal stories and transitions, but, as I cut and pasted line after line into the search engine, I was horrified to uncover just how much she’d lifted from other people’s articles. This felt to me like an act of betrayal. It was stealing. It was lying. I was filled with anger at the injustice of what she’d done. Now ChatGPT delivers those lines for the taking. Is it different just because a machine does the work of rephrasing or paraphrasing other people’s content? I don’t think it is.
As ChatGPT gets better and smarter, we’re going to be living with it and interacting with it in our daily lives. Some of us already are. Ethicists have suggested that AI needs to disclose itself, and this is a good idea, on its surface. But, where book publishing is concerned, it doesn’t matter if AI discloses that it’s AI if a person is determined to claim its output as their own.
So here we are. The future has arrived. Still, the publisher in me sees a silver lining, or a few—even as I stand at the brink of this brave new world feeling cynical and filled with more than a little bit of fear.
I believe AI will make us more discerning readers. More AI-generated writing will create a desire for content that we can recognize as human-generated. We’ll see writers developing new language, syntax, and vocabulary, and reviewers calling out AI-generated books.
I think writers will show up even more authentically and creatively on the page. Voice, already so important and individual, will be what readers seek out. I imagine a future in which good and better storytelling will be valued even more than it already is, and in which personalizing and making true meaning will be key.
I predict that creating from our own internal warehouses of imagination and experience, already a value most writers hold, will become a bright line. People who lean too heavily on AI cannot be thought leaders and conversation starters, and social pressure will serve as a strong deterrent to overreliance on AI. We will start to see authors being canceled when it’s revealed that AI wrote their books. We will coalesce in mutual understanding around acceptable uses of AI, and “real” and “original” will begin to take on new and more important meaning.
The worst fears about AI stem from what it might assist human beings to do, especially without any guardrails: to take content and call it our own (cheating or plagiarizing); to co-opt other people’s style for whatever use (stealing); to regurgitate and reproduce what already exists (flooding the content zone). But it might also challenge us to rise to new heights. Perhaps a tool that gives us access to all the world’s content will actually push more of us to honor and strive to better hone that singular internal voice, the very essence of what makes us human.
Brooke Warner is publisher of She Writes Press and SparkPress, president of Warner Coaching Inc., and author of six books on publishing and memoir.