In These Strange New Minds: How AI Learned to Talk and What It Means (Viking, Mar.), neuroscientist Christopher Summerfield explores how large language models work.
What do you think are the most troubling aspects of AI?
Currently, the two most significant harms from AI are intimate image abuse, which includes child sexual abuse material and deepfake pornography, and the use of synthetic media for financial fraud. Over the long term, I’m less concerned about the runaway capabilities of individual agents and more concerned about unexpected outcomes when AI models are allowed to interact with one another. This is likely to create instabilities, like the volatility we see in financial markets due to algorithmic trading. I also think the increasing personalization of AI models will have worrisome impacts by creating forms of dependence between humans and AI systems.
What can be done about the tendency of AI programs to provide false answers?
One of the main challenges is that people can’t agree on what’s true, and so they disagree about what AI models should say. AI models are increasingly being used for fact-checking, though they’re still less reliable than human equivalents. To make progress, we need more detailed and reliable data with which to train large language models, and we need models that are better able to estimate the reliability of their answers. The former may come from human trainers with expert knowledge in specific domains, and the latter from new technical advances in machine learning.
You suggest AI should “represent minority views without giving false equivalence to extreme and moderate positions.” Can you elaborate on that?
Every media outlet wishing to represent broad constituencies faces this challenge. If lots of people hold potentially harmful views about immigrants, should AI represent those views? In other media, this has led to fragmentation, whereby different outlets represent different political perspectives. As AI proliferates, the same may happen there. You can already see the beginnings of this with Grok, which Elon Musk touts as an alternative to what he perceives to be an overly “woke” ChatGPT.
Will AI ultimately become sentient?
I don’t think we’ll ever know, but in the end, it won’t matter. As AI systems become more capable and humanlike, we’ll treat them “as if” they’re sentient, just as we do one another.
What changes in our interactions with AI might we expect soon?
I think we’ll see models tailored to the beliefs and preferences of individuals. This will be an inevitable part of the drive for personalized assistive technologies, which is what all the major developers are racing to build. We’ll also see technologies able to take more effective actions on digital platforms. Anthropic recently released a model that does exactly that, taking over your mouse and keyboard to complete a task for you.