YouTube’s Likeness Detection Tool Takes a Stand Against AI Deepfakes in 2025


YouTube is taking a major step toward protecting its creators from the growing threat of AI-generated deepfakes. Starting today, the company is rolling out a powerful new Likeness Detection tool that lets creators identify and request the removal of unauthorized videos that use their face, voice, or likeness without consent.

The tool, which lives inside YouTube Studio’s Content Detection tab, is being made available to members of the YouTube Partner Program. It marks one of the platform’s biggest moves yet in the ongoing battle between AI-generated content and human identity.


What Is YouTube’s Likeness Detection Tool?

At its core, the Likeness Detection tool functions as an AI-powered shield for creators. It scans YouTube for videos that might feature synthetic or altered versions of a creator’s face or voice—essentially, AI-generated deepfakes. Once potential matches are found, the creator gets notified and can review the flagged videos directly in YouTube Studio.

If a video looks like an unauthorized AI imitation, the creator can quickly submit a takedown request for review. From there, YouTube will investigate and, if verified, remove the offending content.
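YouTube hasn't published the internals of the matching system, but detection of this kind is commonly framed as embedding similarity: a creator's verified face or voice is encoded as a numeric vector, and uploads whose vectors land too close to it get flagged for human review. The sketch below is purely illustrative; the function names, the 0.85 threshold, and the random vectors are assumptions, not YouTube's implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_candidates(reference: np.ndarray,
                    candidates: dict[str, np.ndarray],
                    threshold: float = 0.85) -> list[str]:
    """Return IDs of videos whose face/voice embedding is suspiciously
    close to the creator's verified reference embedding."""
    return [vid for vid, emb in candidates.items()
            if cosine_similarity(reference, emb) >= threshold]

# Toy example: one reference embedding and two candidate uploads.
rng = np.random.default_rng(0)
reference = rng.normal(size=512)
candidates = {
    "suspect_upload": reference + rng.normal(scale=0.1, size=512),  # near-copy of the likeness
    "unrelated_upload": rng.normal(size=512),                        # different person entirely
}
print(flag_candidates(reference, candidates))  # expected: ['suspect_upload']
```

Even in such a scheme, flagged matches would still go to the creator for review rather than being removed automatically, which is the workflow YouTube describes.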

The process is similar to Content ID, which detects copyrighted material like music and video clips. However, this tool is designed for identity protection—specifically targeting the misuse of a creator’s physical likeness in the age of generative AI.


How Creators Can Access It

To use Likeness Detection, creators must first verify their identity. This involves submitting a government-issued ID and recording a short video selfie to confirm they're the rightful owner of their likeness.

Once verified, YouTube’s system starts scanning across the platform for videos that might replicate the creator’s appearance or voice using AI. These flagged videos appear in a detailed dashboard, showing the title, channel name, number of views, and snippets of the video for context.
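YouTube hasn't published the dashboard's schema, so the record below is only a rough mental model of the fields described here (title, channel, views, a preview snippet) plus a hypothetical review decision; every name in it is an assumption for illustration, not YouTube's actual data model.

```python
from dataclasses import dataclass
from enum import Enum

class ReviewDecision(Enum):
    PENDING = "pending"
    REQUEST_REMOVAL = "request_removal"  # creator submits a takedown request for review
    ARCHIVE = "archive"                  # creator decides no action is needed

@dataclass
class FlaggedVideo:
    """One hypothetical row in the Content Detection dashboard."""
    video_title: str
    channel_name: str
    view_count: int
    snippet_url: str                     # short preview clip shown for context
    decision: ReviewDecision = ReviewDecision.PENDING

# A creator reviewing a flagged upload and requesting its removal.
entry = FlaggedVideo(
    video_title="Exclusive interview (AI-generated?)",
    channel_name="UnknownChannel123",
    view_count=48_200,
    snippet_url="https://example.com/preview/abc",
)
entry.decision = ReviewDecision.REQUEST_REMOVAL
print(entry)
```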

The onboarding process may seem extensive, but it’s designed for security. YouTube wants to ensure that only legitimate creators can use the tool—preventing impersonators or scammers from falsely claiming someone else’s likeness.

Creators also have control over their data. If they choose to disable the tool, YouTube will stop scanning for deepfakes within 24 hours, offering flexibility and transparency.


The Fight Against Deepfakes

AI-generated deepfakes have become one of the biggest threats to online authenticity. From fake celebrity endorsements to political misinformation, manipulated media is blurring the lines between reality and fabrication.

YouTube’s Likeness Detection tool aims to tackle this head-on. It not only protects creators’ identities but also preserves viewer trust—ensuring that audiences know when a video features genuine content.

YouTube is also complementing this feature with stricter policies. Earlier this year, the company required creators to label AI-generated or altered content and announced new rules for AI-generated music that mimics an artist’s voice. Together, these policies create a more transparent and ethical ecosystem for content creation.


A Broader Push Toward Responsible AI

This isn’t YouTube’s first step in addressing AI ethics. The Likeness Detection tool was first announced in 2024 and tested in partnership with Creative Artists Agency (CAA), where select public figures gained early access. YouTube described it as “early-stage technology designed to identify and manage AI-generated content featuring their likeness.”

Now, with the public rollout, YouTube is extending that protection to its vast creator community—starting with Partner Program members and expanding further by early 2026.

The company acknowledges that the tool is still in development, meaning it might occasionally flag a creator’s own videos. But as the AI model improves, accuracy and reliability are expected to increase significantly.


Why It Matters

For creators, Likeness Detection is more than a technical feature—it’s about digital self-defense. As AI becomes more advanced, anyone’s face or voice can be replicated convincingly in a matter of minutes. That creates real risks, from reputational damage to misinformation and identity theft.

YouTube’s approach gives creators something invaluable: control. By allowing them to see where and how their likeness is being used, the company empowers individuals to take action before harm is done.

This initiative also sets a precedent for how social media platforms can balance innovation with responsibility. As AI-generated content grows, other platforms may soon follow YouTube’s lead in offering identity protection tools.


Final Thoughts

The introduction of YouTube’s Likeness Detection tool is a crucial moment for online creators and digital ethics alike. It shows that the platform is not only embracing AI innovation but also taking concrete steps to protect the people behind the content.

While the technology is still evolving, it sends a strong message: in the age of AI, authenticity still matters. And for millions of YouTubers, that’s a reassuring promise.

Reddit Sues Perplexity: The AI Data War In 2025


In a dramatic turn that exposes the growing friction between online platforms and artificial intelligence companies, Reddit is suing Perplexity, accusing the AI-powered search startup of illegally scraping user posts to train its language models. Filed in a New York federal court, the lawsuit names Perplexity along with three alleged partners, Lithuanian data firm Oxylabs, Texas-based SerpApi, and former Russian botnet AWMProxy, accusing them of "bypassing technical barriers" and disguising their data collection practices.

According to Reddit, these entities effectively built a shadow system that siphoned millions of user posts, turning community-driven discussions into unlicensed AI training data. Ben Lee, Reddit’s Chief Legal Officer, went as far as to call it an “industrial-scale data laundering operation,” accusing Perplexity and its partners of stealing user-generated content to fuel AI development without permission or compensation.


What Sparked Reddit’s Lawsuit Against Perplexity

Reddit’s stance isn’t against AI innovation—it’s against AI companies exploiting its ecosystem without consent. The platform has officially licensed its data to companies like OpenAI and Google, both of which signed structured agreements for ethical data usage. These deals not only bring revenue to Reddit but also establish a transparent framework where content creators indirectly benefit when their posts help train AI systems.

However, in the Reddit sues Perplexity case, things took a darker turn. The lawsuit alleges that Perplexity, with the help of its data vendors, used automated scrapers and proxy services to extract Reddit data directly from Google search results, bypassing rate limits and user protections.

To prove it, Reddit engineers reportedly created a "test post" designed only for Google indexing, never meant for real users. Within hours, Perplexity allegedly surfaced that exact post in its AI responses, which Reddit says confirms unauthorized scraping. Even after Reddit sent a cease-and-desist notice, mentions of Reddit content on Perplexity's platform allegedly increased fortyfold, which the company argues further strengthens its case.
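What Reddit describes is essentially a canary test: plant content that only a search crawler should ever see, embed an unguessable token in it, and check whether that token later reappears in someone else's output. The sketch below illustrates the idea only; the function names and the simulated answer are assumptions, not Reddit's actual tooling.

```python
import uuid

def make_canary_post() -> tuple[str, str]:
    """Build a post body containing a unique, unguessable token.
    If the token later shows up in a third party's output, that output
    can only have been derived from this planted post."""
    token = f"canary-{uuid.uuid4().hex}"
    body = f"This post exists only for search indexing. Reference code: {token}"
    return token, body

def token_leaked(token: str, third_party_answer: str) -> bool:
    """Check whether an AI-generated answer reproduces the canary token."""
    return token in third_party_answer

token, body = make_canary_post()
# ... publish `body` so that only the search crawler can reach it ...
simulated_answer = f"According to a Reddit post, the reference code is {token}."
print(token_leaked(token, simulated_answer))  # True -> the content was scraped
```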


How the Defendants Are Responding

Perplexity has firmly denied all allegations, calling Reddit's lawsuit a form of "corporate extortion." The company argues that it merely summarizes publicly available data, which it believes doesn't fall under copyright restrictions. Similarly, SerpApi and Oxylabs have pushed back, claiming their scraping methods operate within legal bounds, since the information is already accessible on the open web.

However, Reddit’s counterargument is straightforward: public visibility doesn’t mean public ownership. The platform insists that large-scale scraping without permission undermines user trust, violates API terms, and directly affects its growing AI licensing business, which reportedly contributes nearly 10% of Reddit’s total revenue.

As for AWMProxy, the company has not released any official statement, leaving its alleged involvement unclear.


Why the Reddit Sues Perplexity Case Matters

This lawsuit isn’t just a corporate dispute—it’s a defining moment for AI ethics and data ownership. As AI models require massive amounts of human-generated content to function, companies like Reddit are beginning to realize the real monetary and moral value of their platforms’ communities.

By taking a public stand, Reddit joins a growing list of content publishers challenging the AI industry's "free data" culture. Similar lawsuits have already emerged from The New York Times and Simon & Schuster, both arguing that AI companies can't build billion-dollar technologies on unpaid, human-created content.

The Reddit sues Perplexity case also raises difficult questions:

  • Where does fair use end and data theft begin?

  • Should AI companies pay platforms and creators for the content that powers their intelligence?

  • And how do we protect the open web without stifling AI innovation?


Looking Ahead: Setting the Tone for AI Accountability

As the lawsuit unfolds in U.S. federal court, its outcome could set a major precedent for AI governance worldwide. If Reddit succeeds, AI developers might be forced to adopt stricter data sourcing standards, sign formal licensing deals, or even compensate platforms retroactively for past scraping.

For Perplexity, this case could define its credibility as an AI-driven search engine in a market increasingly focused on transparency and ethical data use. For Reddit, it’s an opportunity to reinforce that its users’ voices—and their content—hold real value that deserves protection.

At its core, the Reddit sues Perplexity case isn't just about one AI model or one platform. It's about who owns the internet's collective intelligence. In a world where every post, comment, and meme could train an AI, this battle signals that the era of unregulated data scraping might finally be coming to an end.