YouTube’s new AI tool can spot deepfakes of creators: The platform helps fight AI misuse, yet it raises concerns about privacy and fairness

YouTube has rolled out a new AI-powered likeness detection tool for creators in its Partner Program. The tool helps creators find and report videos that use their face or voice without permission, including deepfakes. The rollout starts with a small group of creators, with more getting access in the coming months. YouTube warns that the tool is still being developed and might sometimes surface real videos of the creator, not just AI-generated ones.

Here’s how it works

After verifying their identity, creators can go to the Content Detection tab in YouTube Studio to see videos flagged as possibly using their likeness. If a video looks suspicious, like an AI-generated deepfake, the creator can request that YouTube remove it. Think of it as “Content ID for your face”: just as Content ID scans videos for copyrighted material, this feature scans for faces and voices that may have been faked.

Why YouTube is doing this

YouTube says the feature will help well-known personalities manage AI misuse “at scale.” The company first announced it last year and tested it with creators represented by the talent agency CAA (Creative Artists Agency). The move extends YouTube’s growing list of AI-related policies: in 2024, it began requiring creators to label AI-generated videos and banned AI-made music that imitates a real artist’s voice.
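YouTube has not published how its matching works. Purely as an illustration, likeness systems of this kind typically compare a stored "template" vector derived from the creator's verification footage against embeddings extracted from uploaded videos. The sketch below uses toy hand-written vectors and a made-up `flag_if_match` helper; the vectors, threshold, and function names are all assumptions, not YouTube's actual pipeline.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def flag_if_match(template, video_embedding, threshold=0.9):
    """Flag a video for creator review when its face embedding is close
    to the creator's verified template. Threshold is illustrative only."""
    return cosine_similarity(template, video_embedding) >= threshold

# Toy vectors standing in for learned face embeddings
creator_template = [0.2, 0.8, 0.1]
suspect_video = [0.21, 0.79, 0.12]   # very similar direction -> flagged
print(flag_if_match(creator_template, suspect_video))  # True
```

In a real system the embeddings would come from a trained face-recognition model and the flagged videos would then be surfaced in the Content Detection tab for the creator to review, as described above.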
Essentially, YouTube is expanding its role: not just moderating content, but also protecting people’s identities online.

Protecting privacy or collecting faces?

On the surface, this seems like a win. Deepfakes have become a nightmare for public figures and creators, with AI-generated lookalikes spreading false or offensive content. But there’s a flip side: to use this protection, you have to give YouTube your facial data and a government ID for verification. According to YouTube: “Provide a government ID and a brief video of your face for verification… We use this to create face templates that detect videos where your likeness may be altered or made with AI.”

That raises an important question: why do you need to give up your personal data to protect it? YouTube doesn’t ask for your ID for copyright protection, so why is it necessary for privacy protection?

Only for creators, and that’s a problem

Right now, this feature is only for creators in YouTube’s Partner Program. That means if you’re not a YouTube creator, you can’t use it to protect your likeness. This feels unfair; it’s almost as if YouTube is saying, “Join our platform to protect your identity.” We’ve seen this before: YouTube’s Content ID system only protects people who are part of the program, while everyone else is left to file manual complaints.

What about parody and fair use?

Here’s where things get complicated. Say someone makes a funny parody video using an AI version of a famous tech YouTuber. Under copyright law, parody is usually protected. But with this new tool, that tech creator could take the video down by claiming a “likeness violation.” So human impersonation is okay, but AI impersonation isn’t. This creates a new kind of censorship, where creators can block AI-based parodies or satire simply because their face was generated by AI.

Who decides what’s real or fake?

YouTube is giving itself the authority to decide which claims are valid and what counts as an unauthorised likeness.
But the platform hasn’t explained how creators can appeal false takedowns or fight misuse of the system.
As we’ve already seen with copyright, YouTube often acts as judge and jury, without much transparency.
So while this tool could help reduce deepfakes, it also gives YouTube more control over identity disputes and less accountability for its decisions.
