These Strange New Minds: How AI Learned to Talk and What It Means

Christopher Summerfield. Viking, $32 (384p) ISBN 978-0-593-83171-7

This superlative study from Oxford University neuroscientist Summerfield (Natural General Intelligence) explores how large language models work and the thorny questions they raise. He explains that neural networks learn by guessing the relationships between data points and developing “weights” that prioritize the processing pathways most likely to produce correct answers. Wading into debates over whether LLMs possess knowledge or merely proffer predictions, Summerfield makes the provocative argument that human learning is essentially predictive, relying on the same trial-and-error strategy LLMs use. According to the author, this suggests human knowledge is comparable to AI knowledge. Summerfield is remarkably levelheaded in his assessment of AI’s capabilities, suggesting that while obstacles to designing AI assistants that can book trips and pay bills may be resolved in the next several years, it’s unlikely LLMs will ever become sentient given their inability to experience physical sensation. The lucid analysis also makes clear that technological improvements will never overcome such pitfalls as deciding whether to present answers as definitive or open to debate, since such problems depend on subjective judgment. By inquiring into the nature of knowledge and consciousness, Summerfield brings welcome nuance and clarity to discussions of LLMs. In a crowded field of AI primers, this rises to the top. Agent: Rebecca Carter, Rebecca Carter Literary. (Mar.)