As was discussed at the Frankfurt Rights Meeting Webinar last month, the law is usually, at best, reactive to new technology. At times it can feel like enquiring about the traffic restrictions on Mars. There’s an unavoidable but frustrating time lag as relevant rules and precedents emerge. That’s still true with artificial intelligence, but extensive work is now under way to chart an applicable regulatory landscape, with renewed zeal to address the challenges that AI presents.
It’s renewed because, while there has been regulatory interest for many years (the EU set up an experts’ group in 2018), 2023 has seen a marked uptick in activity. The key European legislation is the AI Act, currently being finalized in Brussels, which seeks to regulate AI systems by tiers of risk, from outright bans on the unacceptable to codes of conduct where concerns are lower. Other territories may follow suit as lawmakers globally consider moves away from self-regulation and soft-law initiatives.
There remains a concern that over-regulation, or premature regulation, risks stifling innovation and driving beneficial commercial activity to other jurisdictions. It’s reminiscent of the emergence of e-commerce 20 years ago, when some territories rushed out new rules only to have to revisit them as technology shifted. Crucially, AI law is emerging against a backdrop of re-evaluation of that earlier regulation, in particular as to the scope of responsibilities for platforms and intermediaries.
Which leads to the effect on creative content and rights, which has obviously been at the heart of the publishing sector’s concerns in reaction to generative AI. Those concerns are reflected in recent additions to the draft AI Act: transparency and disclosure obligations for AI inputs, plus ongoing obligations in respect of AI outputs, introducing some degree of specific content regulation. Whether these offer realistic protections for rightsholders seeking to identify AI data sets and opt out of them will emerge in the coming months.
Of course, as other speakers on the Webinar acknowledged, AI systems, for all their current limitations and deficiencies, whether legal, ethical, or artistic (mediocrity was one descriptor applied to the latter), are already being used by some authors and publishers. As with digitization and text and data mining before them, they provide an opportunity, or at least a potential tool to adopt. As such, they can support rather than replace human creativity, even if only used to better enable, say, discoverability or anti-piracy.
So, the publishing sector will need to operate within, as well as seek protection from, any new rules and regulation. As we wait, some self-help is necessary:
First, because the legal landscape is still emerging, now is the time to engage with industry bodies, governments, and regulators, not just in consultation and lobbying efforts but also, as we noted on the Webinar, in wider sector communication;
Also, any creative business or activity, from individual creator to international publishing house, needs an active policy, or at least a stance, on AI, just as it does for defamation, privacy, or IT security;
Even if that policy or stance is a prohibition (and many want to stay well clear, especially of generative AI), it needs to be communicated and reflected in documentation, including contracts and relevant licensing terms;
Where AI systems are being used, the basis of that use needs to be fully understood, from recording the role and scope of reliance to conducting due diligence on how system T&Cs deal with rights and liabilities;
Since there are still so many unanswered rights questions, AI content will need ring-fencing. Anyone who has managed open-source software or Creative Commons content will already have experience of that.
The bottom line may be that technical solutions are needed to manage this technology, and to back up any new law that does emerge. Could it still be, as Charles Clark famously said at Frankfurt in the 1990s, that “the answer to the machine is in the machine”?
Duncan Calow is a partner at DLA Piper.