AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference

Arvind Narayanan and Sayash Kapoor. Princeton Univ, $24.95 (352p) ISBN 978-0-691-24913-1

Narayanan (coauthor of Bitcoin and Cryptocurrency Technologies), a computer science professor at Princeton University, and Kapoor, a PhD candidate in Princeton’s computer science program, present a capable examination of AI’s limitations. Because ChatGPT and other generative AI software imitate text patterns rather than memorize facts, it’s impossible to prevent them from spouting inaccurate information, the authors contend. They suggest that this shortcoming undercuts any hoped-for efficiency gains and describe how news website CNET’s deployment of the technology in 2022 backfired after errors were discovered in many of the pieces it wrote. Predictive AI programs are riddled with design flaws, the authors argue, recounting how software tasked with determining “the risk of releasing a defendant before trial” was trained on a national dataset and then used in Cook County, Ill., where it failed to adjust for the county’s lower crime rate and recommended thousands of defendants be jailed when they actually posed no threat. Narayanan and Kapoor offer a solid overview of AI’s defects, though the anecdotes about racial biases in facial recognition software and the abysmal working conditions of data annotators largely reiterate the same critiques found in other AI cris de coeur. This may not break new ground, but it gets the job done. (Sept.)