Who’s doing the real work of training AI, and who is most affected when it goes awry? In Code Dependent (Holt, June), Madhumita Murgia, artificial intelligence editor at the Financial Times, discusses the human cost of the increasingly ubiquitous technology.

What prompted your interest in tech?

In 2014, as a young editor at Wired UK, I was learning about how tech shapes society and how it shapes us as individuals. I got intrigued by cookies. We have these ads following us online: What do those data profiles look like? Who’s collecting them? How accurate are they? This was pre-GDPR [General Data Protection Regulation, an E.U. privacy initiative], so I went down a rabbit hole of data brokers and ended up working with Eyeota. I found surprisingly accurate details, not just on what I buy or where I travel, but on my behavior and my personality. It was the first time it struck me how close these data sets can come to defining someone’s behaviors and predicting what they will do.

How does AI manifest in daily life?

Generative AI has changed everything. Where the tech falls down is in how it’s implemented. In Amsterdam, the mayor’s office ran an algorithm that generated a list of hundreds of kids, mostly boys, some of whom had committed crimes and some of whom were simply the brothers of at-risk kids. The intention was not to jail them but to send in the apparatus of public services (educators, social workers) to support the families. But the way it was implemented was to just send a letter, often to single-mother families, which gave them the feeling of local government bursting into their homes, telling them they were doing wrong and that their sons were likely to commit crimes in the future. The systems aren’t becoming more human; they’re probabilistic software. There’s no consciousness or cognition there.

Who’s training these models?

The technology does not “learn” independently; it needs humans, millions of them, to power it. One thing I wanted to do was draw back the curtain on how this tech is built. There’s a misconception that because it’s called artificial intelligence, it has some self-taught cognitive ability to train itself. The training is done by humans, largely in the Global South, who are drawing little boxes, editing text snippets, and labeling data for ChatGPT. Ultimately, are data laborers benefiting from being part of the AI supply chain? They don’t know they’re part of a supply chain at the other end of which is a trillion-dollar company. The whole point of digital work is to lift everyone up, to economically empower people, and that’s not the case here. They’re employed, but they’re not seeing the upsides of the wealth-building. We need to figure out, as a society, what someone should be paid to do this job.

The book’s chapter titles strike an intimate tone—“your body,” “your health,” “your freedom.” What do you want readers to take away?

My hope is for people to say: I can have a voice; I can stand up for what kind of world this should be. Should we allow crime-predicting algorithms, or technology that decides who gets killed in war? What guardrails should we impose on these systems? How can we demand accountability? My hope for AI isn’t that it creates a new, upgraded species without the messiness of humanity, but that it helps us ordinary, flawed humans live our best and happiest lives.
