
OpenAI Says It’s Working With Actors to Crack Down on Celebrity Deepfakes in Sora


OpenAI said Monday it would do more to stop users of its AI video generation app Sora from creating clips with the likenesses of actors and other celebrities. The move came after actor Bryan Cranston and the union representing film and TV actors raised concerns that deepfake videos were being made without the performers’ consent.

Actor Bryan Cranston, the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) and several talent agencies said they struck a deal with the ChatGPT maker over the use of celebrities’ likenesses in Sora. The joint statement highlights the intense conflict between AI companies and rights holders like celebrities’ estates, movie studios and talent agencies — and how generative AI tech continues to erode reality for all of us.

Sora, a new sister app to ChatGPT, lets users create and share AI-generated videos. It launched to much fanfare three weeks ago, with AI enthusiasts searching for invite codes. But Sora is unique among AI video generators and social media apps; it lets you use other people’s recorded likenesses to place them in nearly any AI video. It has been, at best, weird and funny, and at worst, a never-ending scroll of deepfakes that are nearly indistinguishable from reality.

Cranston noticed his likeness was being used by Sora users when the app launched, and the Breaking Bad actor alerted his union. The new agreement with the actors’ union and talent agencies reiterates that celebrities will have to opt in to having their likenesses available to be placed into AI-generated video. OpenAI said in the statement that it has “strengthened the guardrails around replication of voice and likeness” and “expressed regret for these unintentional generations.”

OpenAI does have guardrails in place to prevent the creation of videos of well-known people: It rejected my prompt asking for a video of Taylor Swift on stage, for example. But these guardrails aren’t perfect, as we saw last week with a growing trend of people creating videos featuring the Rev. Martin Luther King Jr. The videos ranged from weird deepfakes of the civil rights leader rapping and wrestling in the WWE to overtly racist content.


The flood of “disrespectful depictions,” as OpenAI called them in a statement on Friday, is part of why the company paused the ability to create videos featuring King.

Bernice A. King, his daughter, last week publicly asked people to stop sending her AI-generated videos of her father. She was echoing comedian Robin Williams’ daughter, Zelda, who called these sorts of AI videos “gross.”

OpenAI said it “believes public figures and their families should ultimately have control over how their likeness is used” and that “authorized representatives” of public figures and their estates can request that their likeness not be included in Sora. In this case, King’s estate is the entity responsible for choosing how his likeness is used. 


This isn’t the first time OpenAI has leaned on others to make those calls. Before Sora’s launch, the company reportedly told a number of Hollywood-adjacent talent agencies that they would have to opt out of having their intellectual property included in Sora. But that initial approach didn’t square with decades of copyright law — usually, companies need to license protected content before using it — and OpenAI reversed its stance a few days later. It’s one example of how AI companies and creators are clashing over copyright, including through high-profile lawsuits.

(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)  


