The UK government just threw up its hands and admitted it has no idea what to do about AI and copyright. Which is honestly refreshing compared to pretending everything’s sorted.
Just weeks ago, they had a plan. A bad one, according to pretty much everyone who actually creates things for a living. The idea was simple enough: let AI companies train their models on copyrighted material, but give creators an opt-out button. Sir Elton John wasn’t having it. Neither was Dua Lipa. Neither was basically the entire creative sector.
So now, Technology Secretary Liz Kendall is saying the government “has listened” and no longer backs that approach. Which sounds good until you realize they’ve replaced it with… nothing. Complete uncertainty.
When Everyone’s Unhappy, Is Anyone Right?
Here’s the thing that’s making this so messy. Both sides have legitimate points, and the government actually acknowledged this in their report this week.
The creative industries are right that artists should have a say in how their work is used. The AI sector is right that these models need training data to function properly. You can’t build useful AI systems in a vacuum. But the two positions are fundamentally at odds, and there’s genuinely no consensus on how to square that circle.
The impact assessment showed both industries matter enormously to the UK economy. Culture is described as a “world-leading national asset.” Meanwhile, the AI industry is growing 23 times faster than the rest of the economy. So the government can’t exactly tell one of them to get lost.
Mandy Hill from the Publishers Association called this a victory over “self-interest of a handful of large corporations,” which is a nice way of framing it. But she also made clear the government hasn’t ruled anything out. Copyright law supposedly already protects creators, but here’s the catch: if no one enforces it, does it actually matter?
The Clock Is Ticking
Anthony Walker at Tech UK pointed out something uncomfortable: the UK is trying to lead the G7 in AI adoption while its competitors move ahead. You can’t lead a race if you’re still arguing about which direction to run.
This isn’t just about technology policy wonkery either. Last year, some of Britain’s biggest artists wanted an amendment to the Data Bill that would force tech companies to declare when they’re using copyrighted material for AI training. It didn’t pass. Sir Elton compared the whole thing to “theft on a high scale,” and honestly, from a creator’s perspective, you see why he’d feel that way.
The government said it won’t reform copyright laws “until we are confident they will meet our objectives.” Which is a polite way of saying “we’re kicking this down the road.” But roads don’t go on forever, and neither does artists’ patience.
What Does This Actually Mean?
For now? Nothing concrete changes. Creators still can’t be sure their work won’t end up in some AI training dataset. Tech companies still don’t know what the legal landscape will look like six months from now. The government gets to say it’s “thinking carefully” about this.
The real tension here is that you can’t have it both ways without inventing some entirely new framework. Either AI companies need to negotiate licenses with creators before training, which is expensive and complicated. Or they use existing work with opt-outs that creators probably won’t know about. Or you create some kind of collective licensing scheme that nobody’s quite figured out yet.
None of these options are popular with both sides, which is probably why the government has suddenly decided it has no preferred position. The honest answer is that business interests and creative interests are going to clash on this one no matter what, and someone’s going to be unhappy.
The government committed to getting this right, which is nice. But the question nobody’s asking is whether getting it right is actually possible, or if we’re just going to end up with whatever compromise nobody wanted in the first place.