YouTube’s New Shorts Time Limit Feature Aims to Curb Doomscrolling


YouTube has officially started rolling out a new Shorts Time Limit Feature — a digital well-being update designed to help users put a cap on endless scrolling. With short-form content now dominating screen time for both adults and teens, this feature gives viewers more control over how long they spend inside the addictive Shorts feed before the app pauses playback.

This change comes as platforms face mounting pressure to introduce better guardrails around youth screen time and compulsive scrolling patterns.


Why YouTube Introduced the Shorts Time Limit Feature

Short-form video is engineered for quick dopamine hits — “just one more scroll” often turns into 30 minutes or more. YouTube’s own research shows that a huge portion of its youth audience consumes Shorts for long, unplanned sessions, which is why this Shorts Time Limit Feature specifically targets binge-watching behavior.

According to the company, this is not just another nudge like the existing “Take a Break” reminders. Instead, once a user hits their chosen daily limit, the Shorts feed is paused and a notification appears on screen. Unlike those older wellness pop-ups, this one interrupts the feed and makes users consciously decide whether to keep scrolling.


How the Feature Works

The Shorts Time Limit Feature is currently rolling out on mobile devices:

  1. Open the YouTube app

  2. Tap your profile icon → Settings

  3. Go to General

  4. Set your daily scrolling limit for Shorts

Once the limit is reached, the Shorts feed pauses automatically, reinforcing digital boundaries in a more deliberate and effective way. Users can dismiss the alert for now, but YouTube says parental controls are coming soon.
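
To make the behavior concrete, here is a minimal sketch of how a per-day watch-time cap like this could be modeled. It is purely illustrative: the class name, the midnight-reset logic, and the 30-minute figure are assumptions for the example, not YouTube's actual implementation.

```python
from datetime import date

class DailyShortsLimit:
    """Illustrative model of a per-day Shorts watch-time cap.

    Hypothetical sketch only: the class, its fields, and the reset logic
    are assumptions for illustration, not YouTube's implementation.
    """

    def __init__(self, limit_minutes: int):
        self.limit_seconds = limit_minutes * 60
        self.watched_seconds = 0
        self.current_day = date.today()

    def record_watch(self, seconds: int) -> bool:
        """Log watch time and return True if playback may continue."""
        today = date.today()
        if today != self.current_day:
            # New calendar day: the counter resets automatically.
            self.current_day = today
            self.watched_seconds = 0
        self.watched_seconds += seconds
        return self.watched_seconds < self.limit_seconds


# Example: a user sets a 30-minute daily limit, then scrolls past it.
limit = DailyShortsLimit(limit_minutes=30)
if not limit.record_watch(seconds=31 * 60):
    print("Daily Shorts limit reached - pausing the feed.")
```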


Parental Controls Coming Soon

Later this year, YouTube will integrate the Shorts Time Limit Feature into supervised accounts. For kids and teens, the time-limit alert will become non-dismissible — meaning once the limit is reached, scrolling stops for the day unless a parent changes the setting.

This update responds to rising concerns from parents, regulators, and psychologists over the link between excessive short-form content consumption and anxiety, sleep disruption, and reduced attention span.


Part of a Bigger Push for Digital Wellbeing

YouTube already offers tools like:

  • “Take a Break” reminders

  • Bedtime reminders

  • Watch history controls

But the Shorts Time Limit Feature is the strongest intervention yet: rather than simply suggesting a break, it pauses the feed outright, and for supervised accounts it will soon enforce one.

It also aligns with the ongoing legal scrutiny faced by YouTube, Meta, TikTok, and Snapchat over youth safety and time-use practices.


Why This Matters

The timing of this rollout is significant. In 2024, a Pew survey found that:

  • 73% of U.S. teens use YouTube daily

  • 15% said they are “almost constantly” on the app

With Shorts now an enormous part of YouTube’s growth strategy, the Shorts Time Limit Feature attempts to strike a balance — keep the ecosystem thriving while protecting users from compulsive overuse.


Final Takeaway

The Shorts Time Limit Feature is a meaningful step toward healthier social media habits. It won’t eliminate doomscrolling altogether, but it puts agency back in the hands of users — and soon, in the hands of parents for minors.

YouTube is clearly signaling that its future growth must come with responsibility. And in a world where attention is the new currency, giving people tools to manage theirs is a smart move — both ethically and strategically.

YouTube’s Likeness Detection Tool Takes a Stand Against AI Deepfakes in 2025


YouTube is taking a major step in protecting its creators from the growing threat of AI-generated deepfakes. Starting today, the company is rolling out a powerful new Likeness Detection tool that lets creators identify and remove unauthorized videos that use their face, voice, or likeness without consent.

The tool, which lives inside YouTube Studio’s Content Detection tab, is being made available to members of the YouTube Partner Program. It marks one of the platform’s biggest moves yet in the ongoing battle between AI-generated content and human identity.


What Is YouTube’s Likeness Detection Tool?

At its core, the Likeness Detection tool functions as an AI-powered shield for creators. It scans YouTube for videos that might feature synthetic or altered versions of a creator’s face or voice—essentially, AI-generated deepfakes. Once potential matches are found, the creator gets notified and can review the flagged videos directly in YouTube Studio.

If a video looks like an unauthorized AI imitation, the creator can quickly submit a takedown request for review. From there, YouTube will investigate and, if verified, remove the offending content.

The process is similar to Content ID, which detects copyrighted material like music and video clips. However, this tool is designed for identity protection—specifically targeting the misuse of a creator’s physical likeness in the age of generative AI.


How Creators Can Access It

To use Likeness Detection, creators must first verify their identity. This involves submitting a government-issued ID and recording a short video selfie to confirm they’re the rightful owner of their likeness.

Once verified, YouTube’s system starts scanning across the platform for videos that might replicate the creator’s appearance or voice using AI. These flagged videos appear in a detailed dashboard, showing the title, channel name, number of views, and snippets of the video for context.
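
For illustration, a flagged entry in that dashboard might be modeled roughly as below. This is only a sketch based on the fields the article mentions (title, channel name, views, snippets); the field names and the takedown flag are assumptions, not YouTube's API.

```python
from dataclasses import dataclass, field

@dataclass
class FlaggedVideo:
    """Hypothetical shape of one flagged match in the review dashboard.

    Field names are assumptions based on the information described in the
    article (title, channel, views, snippets); this is not YouTube's API.
    """
    title: str
    channel_name: str
    view_count: int
    snippet_urls: list[str] = field(default_factory=list)
    takedown_requested: bool = False

    def request_takedown(self) -> None:
        # The creator flags the video for YouTube to review and possibly remove.
        self.takedown_requested = True


# Example: reviewing one detected match and requesting its removal.
match = FlaggedVideo(
    title="Exclusive interview (AI-generated)",
    channel_name="unverified-reuploads",
    view_count=12_400,
    snippet_urls=["https://example.com/frame_001.jpg"],
)
match.request_takedown()
```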

The onboarding process may seem extensive, but it’s designed for security. YouTube wants to ensure that only legitimate creators can use the tool—preventing impersonators or scammers from falsely claiming someone else’s likeness.

Creators also have control over their data. If they choose to disable the tool, YouTube will stop scanning for deepfakes within 24 hours, offering flexibility and transparency.


The Fight Against Deepfakes

AI-generated deepfakes have become one of the biggest threats to online authenticity. From fake celebrity endorsements to political misinformation, manipulated media is blurring the lines between reality and fabrication.

YouTube’s Likeness Detection tool aims to tackle this head-on. It not only protects creators’ identities but also preserves viewer trust—ensuring that audiences know when a video features genuine content.

YouTube is also complementing this feature with stricter policies. Earlier this year, the company required creators to label AI-generated or altered content and announced new rules for AI-generated music that mimics an artist’s voice. Together, these policies create a more transparent and ethical ecosystem for content creation.


A Broader Push Toward Responsible AI

This isn’t YouTube’s first step in addressing AI ethics. The Likeness Detection tool was first announced in 2024 and tested in partnership with Creative Artists Agency (CAA), where select public figures gained early access. YouTube described it as “early-stage technology designed to identify and manage AI-generated content featuring their likeness.”

Now, with the public rollout, YouTube is extending that protection to its vast creator community—starting with Partner Program members and expanding further by early 2026.

The company acknowledges that the tool is still in development, meaning it might occasionally flag a creator’s own videos. But as the AI model improves, accuracy and reliability are expected to increase significantly.


Why It Matters

For creators, Likeness Detection is more than a technical feature—it’s about digital self-defense. As AI becomes more advanced, anyone’s face or voice can be replicated convincingly in a matter of minutes. That creates real risks, from reputational damage to misinformation and identity theft.

YouTube’s approach gives creators something invaluable: control. By allowing them to see where and how their likeness is being used, the company empowers individuals to take action before harm is done.

This initiative also sets a precedent for how social media platforms can balance innovation with responsibility. As AI-generated content grows, other platforms may soon follow YouTube’s lead in offering identity protection tools.


Final Thoughts

The introduction of YouTube’s Likeness Detection tool is a crucial moment for online creators and digital ethics alike. It shows that the platform is not only embracing AI innovation but also taking concrete steps to protect the people behind the content.

While the technology is still evolving, it sends a strong message: in the age of AI, authenticity still matters. And for millions of YouTubers, that’s a reassuring promise.