During the past year, generative artificial intelligence (AI) has taken the world by storm, with enthusiasts projecting that it will solve everything from world hunger to climate change, and detractors warning that it threatens humanity’s very existence—and probably your job. But what exactly is generative AI? In his new book, Generative Artificial Intelligence: What Everyone Needs to Know (Oxford University Press, Feb.), Silicon Valley veteran Jerry Kaplan explores the vast potential—and risks—of a tool that enables us to tap into the accumulated wisdom of humankind.
Kaplan, an entrepreneur and inventor, teaches courses at Stanford University on the social and economic impacts of artificial intelligence. In his book, he explains why generative AI is a completely different animal.
“Just a few years ago, most AI systems were tuned for specific tasks, such as recognizing pictures of cats or steering a car,” Kaplan says. “Generative AI is quite different, in that it’s much more general. The same system that can draft a college essay can compose a sonnet, explain how to change a flat tire on a car, plan a Thanksgiving meal, or offer love advice. It can also diagnose medical conditions, write a legal brief, summarize documents and meetings, and perform many other tasks that previously required human expertise and intelligence. This is a major advance that will yield innumerable benefits for society.”
It also creates many new risks. “Any technology this powerful is potentially very dangerous in the wrong hands,” he says. “Authoritarians can use it to spread disinformation and suppress dissent; frightening new weapons will change the nature of war; and criminals can use it to steal, cheat, and swindle people on a previously unimaginable scale.”
Kaplan is encouraged by the extensive work on creating protocols and procedures to mitigate these risks. In his own life, he has developed precautions, such as establishing a “safe word” with his wife to authenticate each other and protect themselves from voice clones asking for, say, banking PINs.
And he has good news: The robots aren’t plotting against us. Generative AI systems can seem human—for example, they can show empathy or lie to cover up their mistakes. “But they are not nascent beings with their own independent thoughts, feelings, and desires,” Kaplan says. “We don’t have to worry about them rising up and taking over, because they don’t want anything. There is no ‘they,’ so ‘they’ are not coming for ‘us.’ A robotic generative AI is not going to suddenly wake up, realize it’s being exploited, inexplicably grow its own goals, take over the world, and possibly wipe out humanity, as you often see in movies.”
Well, that’s a relief! But what about our jobs? On this front, Kaplan advises readers to see generative AI as a career-enhancing tool, rather than a threat to their livelihoods. For example, as he wrote the book, Kaplan used generative AI as a copy editor that could suggest illuminating analogies or interesting turns of phrase. He also used it to create a short summary to begin each chapter, as he discloses in the book. He sees publishers using generative AI throughout the acquisition and production process—for example, to compare and contrast new book proposals with a publisher’s entire catalog.
It will also help us manage our daily lives. Kaplan sees many of us using generative AI as a personal assistant that can renew driver's licenses, prepare travel itineraries, draft wedding toasts, and even tackle sticky interpersonal situations like dodging social obligations. “Your electronic agent will be at your service without judgment or disdain,” he says.
Navigating this new world requires adopting a new mindset about what machines can do. “As during the Copernican revolution, when we accepted the idea that we weren’t at the center of the universe, we need to get comfortable with sharing our world with highly intelligent and perceptive machines,” he says. “I, for one, am looking forward to it!”