When the Breaking Bad actor spotted his AI-generated likeness on OpenAI’s video generator, it sparked changes that could reshape how AI handles celebrity deepfakes.
OpenAI just learned a hard lesson: you can’t tell users not to create unauthorized celebrity deepfakes and then let them do exactly that.
What Happened?
When OpenAI launched Sora 2 on September 30th, their cutting-edge AI video generator came with a promise: you couldn’t recreate real people’s faces without permission. Users would need explicit consent through something called a “cameo” feature.
Reality? That policy had more holes than Swiss cheese.
Within days, the platform was flooded with AI-generated videos of everyone from Bryan Cranston to the late Michael Jackson, plus copyrighted characters like Ronald McDonald. The guardrails weren’t guarding much at all.
How Bryan Cranston Triggered the Changes
The Breaking Bad star didn’t stay quiet. He took his concerns to SAG-AFTRA (the union representing over 150,000 film and TV performers), triggering a chain reaction that led to OpenAI actually strengthening its safeguards.
On Monday, OpenAI, SAG-AFTRA, and several talent agencies released a joint statement confirming they’re working together to “ensure voice and likeness protections in Sora 2.” OpenAI CEO Sam Altman emphasized the company is “deeply committed to protecting performers from the misappropriation of their voice and likeness.”
Why This Matters
This isn’t just celebrity drama—it’s about the future of work and identity in the AI age.
For Hollywood: Artists have been nervous about AI stealing their work and identity for years. While some creatives quietly use AI tools, tensions remain high. SAG-AFTRA even faced backlash last year for trying to license voice actors’ work to AI companies—many performers opposed any collaboration whatsoever.
For copyright holders: Talent agency CAA blasted OpenAI before Monday’s announcement, accusing them of exposing clients and their intellectual property to “significant risk.” Videos featuring SpongeBob, Pikachu, and Mario had been spreading across the internet with Sora watermarks clearly visible.
For everyone else: If AI can convincingly replicate anyone’s face and voice without permission, what does that mean for truth, consent, and personal identity?
What’s Different Now
Try to generate videos of copyrighted characters or real people on Sora now, and you’ll hit a wall. The app returns an error message saying your request “may violate our guardrails” around “third-party likeness” or “similarity to third-party content.”
It’s unclear if OpenAI changed its overall policy on copyrighted content (they haven’t responded to requests for comment), but the technical guardrails are clearly stronger.
The Exception: Some Public Figures Are Playing Along
Here’s where things get interesting. Some public figures are embracing their AI doppelgängers.
Sam Altman himself has encouraged people to make deepfakes of him—including videos showing him sticking his head out of a toilet or shoplifting at Target (yes, really; the shoplifting clips raise separate concerns about fake surveillance footage).
YouTuber and boxer Jake Paul, an OpenAI investor, has allowed countless fake videos depicting him in various scenarios and says the team is “making the internet fun again.”
What Comes Next
The joint statement supports the NO FAKES Act, proposed legislation that would hold people, companies, and platforms liable for unauthorized deepfakes. Introduced in the Senate last April, the bill hasn’t moved forward in Congress yet.
SAG-AFTRA President Sean Astin praised the opt-in approach: “Bryan Cranston is one of countless performers whose voice and likeness are in danger of massive misappropriation by replication technology.”
Cranston expressed cautious optimism: “I’m grateful to OpenAI for its policy and for improving its guardrails, and hope that they and all of the companies involved in this work, respect our personal and professional right to manage replication of our voice and likeness.”
The Takeaway
This case shows how AI policy often works in practice: companies announce rules, those rules prove inadequate, and public pressure forces real change. Whether these strengthened guardrails will hold up long-term remains to be seen—but at least one Hollywood legend made sure OpenAI had to take the issue seriously.
For now, Walter White’s digital twin has been put back in the box. But the broader questions about AI, consent, and identity? Those are just getting started.