Only 7% of authors who are aware that their work has been used to train AI models have given permission for such usage, according to a survey of 13,574 members of the Authors' Licensing and Collecting Society (ALCS) in the U.K. Furthermore, 77% of authors did not know whether their work had been used in this way. The findings, ALCS said, made clear “the need for licensing options that fairly compensate writers for the use of their work in training AI models.”
Authors were not necessarily opposed to the use of their works to train AI models, provided that they had given permission (91%), received fair compensation (96%), and were appropriately credited (87%), the survey found. Roughly 81% of respondents said they would welcome a collective licensing arrangement for such usage, with 72% expressing a preference for retaining opt-out rights.
ALCS CEO Barbara Hayes said that “the findings of our survey confirm much of what we already believed about writers' attitudes to the lack of remuneration and choice. But what also became clear is that writers are often in the dark about what is happening to their works, and subsequently they don't know how to feel about it. They have a lot of questions: How do we find out what has been used? How will any remuneration work? How will AI affect the careers of authors?” She added: “It is our belief that licensing offers the best solution for ensuring authors are recognised and fairly compensated for the use of their work in AI systems, if that is what they choose to do.”
Tom Chatfield, ALCS chair, noted that “behind these statistics lies a fundamental imbalance in how creative work is valued and respected in an algorithmic age. While tech companies rush to train ever-larger systems on vast libraries of human-made content, its creators are neither consulted nor compensated. Yet what's striking about the ALCS survey isn't just writers' concerns—it's their readiness to engage with solutions.” He added: “Writers aren't against technological progress. They're opposed to exploitation, alongside the confusions and category errors that a lack of transparency breeds.”
A version of this story previously ran in BookBrunch.