The 1984 movie The Terminator popularized a long-running science fiction theme: one day, human-built machines could develop independent thought and turn against their hapless creators. In 1997, chess grandmaster Garry Kasparov played his second match against Deep Blue, a program run by an IBM supercomputer. Deep Blue wasn’t the first program to play chess against a human, but it was the first to defeat a reigning world champion in a match.
Today, generative AI—which ingests text, sound, images, and other data and builds its own output in response to prompts from users—is everywhere, embedded in everyday interactions with phones, computers, and other personal tech. For some, this is a welcome convenience. For others, a terrifying intrusion. Forthcoming books contemplate the ongoing tension.
OK computer
Is AI a net positive or a net negative? The question lies at the heart of Our Next Reality (Nicholas Brealey, June), with tech entrepreneur Alvin W. Graylin taking a favorable view and Louis Rosenberg, CEO of Unanimous AI, sounding a cautionary note. By structuring the book as a debate, the coauthors hope to guide professionals and policymakers toward solutions and regulations that could deliver the technology’s benefits without its perils.
Rosenberg draws comparisons between AI and the Wild West of social media, and warns against allowing the technology to advance according to the dictates of profit, without sufficient guardrails. “Fifteen years ago, social media was seen as utopian technology that would break down barriers,” he says. “Instead it created a lot of problems because there were no protections in place. Right now, you see targeted content that’s chosen to influence you. Five years from now, most of us will be getting our content by conversationally speaking to these systems.” In other words, rather than typing into ChatGPT, users will speak with an entity that looks and sounds like a person: for adults, a sympathetic face; for kids, a cartoon character.
Though Rosenberg acknowledges the threat of bad actors, his main focus is on the perfectly legal actions of large corporations. “We shouldn’t allow them to use an advertising business model that trades user privacy for conversational influence,” he says. “Advertisers will love to pay for that. We’re not prepared for this. Regulators need to block this whole pathway.” The danger, he emphasizes, is that advertisers aren’t just selling mattresses and T-shirts, but propaganda and misinformation.
Gary Marcus, a professor of psychology and neural science at NYU and the author of Taming Silicon Valley (MIT, Sept.), agrees that the financial incentives pose the greatest risk. “I’ve written a number of science books; this is my activism book,” he says. “AI development is run by tech companies that don’t have the public interest at heart, and the U.S. government hasn’t moved quickly to protect citizens. It’s a so-called dual-use technology—it could be used for good or harm. But generative AI is unreliable, fallible. It doesn’t understand the difference between fact and fiction. It can manipulate elections; it can deep-fake porn. Pandora’s box has been opened.”
Marcus sees AI as an accelerant for issues already inflamed by social media. Russia, he points out, spent $1 million a month in 2016 to influence the U.S. presidential election—a limited, expensive effort. Today, spreading disinformation costs next to nothing. “We need layered oversight, including something like the FDA process,” he says. “The government has been dragging its feet. I don’t want to ban AI any more than I want to ban airplanes, but we need regulatory regimes.”
Payal Arora, professor of inclusive AI cultures at Utrecht University and author of 2019’s The Next Billion Users, notes that the rise of AI is met with more optimism in parts of the world most in need of economic growth. Her new book, From Pessimism to Promise (MIT, Sept.), explores how the technology benefits entrepreneurs and users in the Global South. In remote villages in India, speakers of less common languages and dialects can access AI-translated entertainment and information. Meanwhile, companies in India that cater to clients in Australia and the U.S. can provide accent-free voiceover services, allowing them to focus on content creation rather than speech training.
When it comes to using AI to fill emotional needs, Arora says, “people sit in high judgment without understanding the realities” of lonely Bangladeshi workers, for instance, who live eight to a room in Dubai, trading romantic messages with chatbots on their phones. She also notes AI’s potential for activists; in Iran, protest posters can feature generated images of women without hijab, putting no real people at risk.
Reservations about the technology are doing wealthy countries no favors, she says. “We’re so preoccupied with containing technology that we haven’t put enough energy into building something rather than breaking things down. The U.S. has the most resources in the world, yet India’s patents and technology are almost on par. Putting egos aside, why not learn from where we can get our best ideas?”
AI, take the wheel
Authors interviewed for this piece expressed skepticism about the idea that machines can replace human imagination; human effort, however, is another story. In Mastering AI (S&S, July), Jeremy Kahn, who writes Fortune’s weekly Eye on AI newsletter, aims to convey how AI “is going to change how we think, how we work, the whole economy, government, science and medicine,” he says, and why it poses “an existential risk to civilization.”
He sees few positives in humans relying on AI for emotional input. A chatbot therapist, he says, is definitionally inauthentic; “I understand” is an untrue statement from a machine incapable of cognition. “I think there’s a very real danger that AI will increase social isolation,” he says. “People will become addicted to chatting to an AI that is never critical or judgmental and eschew the more messy, difficult interactions that constitute true friendship with real people.”
While AI can’t replace creativity, Kahn explains, it can replicate enough of a workforce’s regular tasks to pose a serious threat. “We may see wage depression in many fields,” he says. “AI is a copilot tech—it can help you be more productive, or work at a higher level of expertise, but it means many more people can do the same task. The most sought-after people can command more for their labor than before, but if you’re average you have a problem.”
Shannon Vallor, who specializes in the ethics of data and artificial intelligence at the University of Edinburgh, takes a more philosophical view in The AI Mirror (Oxford Univ., June), underscoring that AI needs human intervention to ensure its ethical use. Generative AI is not inherently separate from humanity, she says, but rather reflects it. “It’s not showing us the face of our future, but our past. AI is fundamentally conservative. Literally: it conserves the behaviors to which we’re habituated, and can’t show us where we need to venture.”
The problem, Vallor says, is that AI only knows where human ingenuity has been, and, lacking the ability to create anything on its own, can only reflect the past, with all its failures and biases. As an example, she notes a 2015 algorithm used in hospitals to triage severely ill patients. The data it trained on, from U.S. hospitals and clinics, reflected historical medical neglect for Black patients, who received fewer follow-ups and inferior medications. So, Vallor explains, “the unjust treatment was picked up and projected into the future.” PW’s starred review said Vallor has “a fresh and fascinating take on the perils and promises of a much-debated technology.”
Another call for caution comes from Neil D. Lawrence, a professor of machine learning at Cambridge University, former director of machine intelligence at Amazon, and author of the forthcoming PublicAffairs release The Atomic Human (Sept.). Lawrence “understands that we’re firmly in the age of AI, which has many essential uses in our society, both personal and business,” says PublicAffairs contributing editor John Mahaney. “But we don’t really have a handle on the genie that we’re letting out of this bottle. There’s going to be a cognitive dependence on what these machines can do.”
Lawrence reiterates the importance of focusing on the fundamental differences between AI and humans. His book’s title riffs on the Greek philosopher Democritus, who speculated that physical matter cannot be divided indefinitely and that an indivisible core, or atom, remains. Mahaney says Lawrence applies that notion to humanity: “What is the core of human intelligence? Until we understand that, we’re not going to be able to harness this technology in a way that’s good for humanity. What makes us unique is our creativity.”
He and others interviewed for this piece agree that innovation and ingenuity will be the driving force behind future human success. Now, nobody tell the robots, okay?
Liz Scheier is a writer, editor, and product strategist living in Washington, D.C. She is the author of the memoir Never Simple.