YouTube has announced its support for the “Nurture Originals, Foster Art, and Keep Entertainment Safe” (NO FAKES) Act of 2024, a bipartisan bill introduced by Senators Chris Coons and Marsha Blackburn that aims to protect people from unauthorized deepfakes and their use across digital platforms.
The NO FAKES Act aims to hold individuals and/or companies liable if they produce unauthorized digital replicas of any individual in a performance, while platforms would also be held responsible for knowingly hosting such content.
Defining platform responsibility for hosting that material is a difficult area of the legislation, but YouTube has committed to working within these new rules to develop better systems.
As per the bill:
“Generative AI has opened new worlds of creative opportunities, providing tools that encourage millions of people to explore their own artistic potential. Along with these creative benefits, however, these tools can allow users to exploit another person’s voice or visual likeness by creating highly realistic digital replicas without permission.”
The bill refers to an AI-generated song that replicated Drake’s voice, and an advertisement which used an AI-generated version of Tom Hanks.
Misuses like this will only become more convincing as the technology continues to improve, which is why this legislation serves an important purpose in providing legal recourse for such misrepresentations.
And now, YouTube is committing to enforcing the NO FAKES bill across its platform.
As per YouTube:
“We believe AI holds incredible potential, but unlocking it responsibly means putting safeguards in place. The NO FAKES Act provides a smart path forward because it focuses on the best way to balance protection with innovation: putting power directly in the hands of individuals to notify platforms of AI-generated likenesses they believe should come down. This notification process is critical because it makes it possible for platforms to distinguish between authorized content from harmful fakes.”
It’s the latest in YouTube’s commitments to combating AI fakes, with the platform also supporting the TAKE IT DOWN Act, which criminalizes non-consensual intimate imagery.
YouTube’s also added likeness control tools to help people detect and manage how AI is used to depict them on YouTube, as well as a pilot program with the creative industry that gives some of the world’s most influential figures access to advanced detection and removal requests.
It’s a key area of concern, and while we haven’t seen the flood of AI fakes that many anticipated ahead of last year’s U.S. election, the use of AI-generated depictions is rising, and causing varying levels of confusion.
As such, this is an important step, and you can expect to see more platforms moving to support the NO FAKES Act.
NOTE: Amazon, Google, Meta, and X have all pledged their initial support for the NO FAKES Act.