The Academy Just Drew a Line in the Sand on AI in Film

The Academy of Motion Picture Arts and Sciences dropped some new rules on Friday, and they’re about as clear as a studio memo can get: if an AI made it, the Oscars don’t want it.

The specifics are straightforward enough. Only performances that are “credited in the film’s legal billing and demonstrably performed by humans with their consent” are eligible for Academy Awards. Screenplays must be “human-authored.” The Academy also reserved the right to dig into how films used AI and verify human authorship, essentially giving themselves a fact-checking mechanism they’ll probably need sooner rather than later.

This isn’t coming out of nowhere. AI-generated actors are already making headlines, from the independent film using a deepfake version of Val Kilmer to the ongoing saga of AI “actress” Tilly Norwood. The entertainment industry has been grappling with this stuff since the 2023 actors’ and writers’ strikes, when concerns about AI basically consumed the negotiation rooms. The Academy clearly decided waiting around wasn’t going to cut it anymore.

Drawing the Line

What’s interesting here is that the Academy is being explicit about what counts and what doesn’t. That specificity matters because it closes loopholes. You can’t sneak in an AI performance and claim the actor consented if they never did. You can’t use AI to write 90% of a screenplay and call the human who touched it up the author.

That said, these rules reveal how messy this conversation actually is. The Academy had to create definitions. They had to figure out what “demonstrably performed by humans” even means in an age when the line between human performance and AI enhancement gets blurrier every few months. They’re trying to write rules for a technology that’s changing faster than policy frameworks can handle.

The pushback isn’t confined to Hollywood either. Publishers have already pulled books suspected of using AI. Writers’ groups are implementing their own eligibility restrictions. What we’re watching is an industry-wide scramble to establish boundaries before those boundaries become impossible to enforce.

The Real Question

Here’s what makes this complicated: the Academy’s rules work fine if everyone agrees to follow them and if AI stays in its current form. But technology rarely cooperates with expectations. Video models keep improving. Deepfakes keep getting more convincing. The gap between “definitely human” and “probably AI” keeps shrinking.

The Academy seems to understand this, which is why they included that line about requesting more information on AI usage. They’re essentially building in an audit function, anticipating that they’ll need to verify claims regularly. Smart move, but also a tacit admission that policing this is going to be an ongoing headache.

What’s unspoken in these rules is the larger tension. Nobody’s saying AI can’t be used in filmmaking at all. Visual effects studios already use generative tools for backgrounds and environments. The line the Academy drew is specifically about taking on-screen credit for work you didn’t do. But as AI becomes more integrated into the creative process, that line might become harder to maintain.

The real test isn’t whether the Academy can enforce these rules today. It’s whether they can adapt them quickly enough as the technology evolves, or whether we’ll spend the next five years watching filmmakers find increasingly creative ways to circumvent regulations written for a version of AI that’s already becoming obsolete.

Written by

Adam Makins

I’m a published content creator, brand copywriter, photographer, and social media manager. I help brands connect with their customers by developing engaging content that entertains, educates, and offers value to their audience.