YouTube is making changes to discourage people from getting stuck in negative “wormholes” by repeatedly watching harmful videos. The move is part of the platform’s broader content moderation strategy, which has been updated significantly over the past year to improve user experience and safety.
Last year, the video-sharing platform consulted research on young people’s development and shared that some kinds of videos aren’t harmful or offensive when seen once, but become more damaging when viewed repeatedly. Research on prolonged exposure points the same way, with teens in several studies reporting negative effects from algorithm-driven content loops.
Now, the Google-owned platform has developed methods for identifying categories of video that could be deemed problematic and is testing ways to spread out how often such videos are shown. According to internal testing, the system uses machine learning models to spot content patterns that may be harmful when viewed repeatedly.
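YouTube has not published technical details, but the core idea, spacing out repeated recommendations from a flagged category rather than removing them outright, can be illustrated with a simple re-ranking heuristic. The sketch below is purely illustrative: the category names, the `cap` parameter, and the `spread_out` function are assumptions for the example, not YouTube’s actual implementation.

```python
from collections import Counter

# Hypothetical sensitive categories, loosely mirroring those named in the article.
SENSITIVE = {"social_aggression", "body_comparison", "risky_finance"}

def spread_out(candidates, recent_history, cap=1):
    """Re-rank candidates so a sensitive category the user has recently
    watched is not recommended more than `cap` times in one feed.

    candidates: ranked list of (video_id, category) tuples
    recent_history: categories the user watched recently
    """
    recent = Counter(recent_history)
    shown = Counter()
    kept, deferred = [], []
    for video_id, category in candidates:
        if (category in SENSITIVE
                and recent[category] > 0        # user already saw this theme
                and shown[category] >= cap):    # and the feed cap is reached
            deferred.append((video_id, category))  # demote, don't remove
        else:
            kept.append((video_id, category))
            shown[category] += 1
    return kept + deferred

if __name__ == "__main__":
    history = ["body_comparison", "body_comparison", "music"]
    feed = [("v1", "body_comparison"), ("v2", "body_comparison"),
            ("v3", "music"), ("v4", "body_comparison")]
    print(spread_out(feed, history))
    # [('v1', 'body_comparison'), ('v3', 'music'),
    #  ('v2', 'body_comparison'), ('v4', 'body_comparison')]
```

Note that the deferred videos stay in the feed rather than being deleted, which matches the article’s description of limiting repeated exposure rather than removing borderline content.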
Initially, the changes applied to two specific types of video: those depicting social aggression and those comparing idealized physical features or body types. Health experts have identified these categories as particularly influential on young viewers’ self-perception, with repeated exposure linked to increased rates of body dissatisfaction and anxiety in adolescents.
But now, the changes are being rolled out to three further categories, including poor financial advice that takes advantage of young people who lack strong knowledge of the topic. Financial literacy among teens remains concerningly low, with many high school students lacking a basic grounding in the subject.
These include clips promoting “get rich quick” schemes, such as suggesting that youngsters buy lottery tickets to make money. Such content has proliferated across social media platforms, and consumer protection agencies say younger viewers are the most vulnerable demographic.
The platform is also cracking down on videos that “portray delinquency or negative behaviors”, such as cheating, lying or taking part in pranks, and on clips that portray teens as “cruel and malicious” or encourage them to ridicule others. Behavioral scientists have documented that repeated exposure to such content can normalize antisocial behavior among impressionable viewers.
YouTube is also adding features that will remind viewers to take a break, or go to bed, if they have been watching for a long time. These digital wellbeing tools will join the platform’s existing user health features, which have already helped reduce late-night viewing among teenage users.
YouTube’s latest policy changes reflect growing concerns about algorithm-driven content consumption and its potential impact on mental health. The platform receives hundreds of hours of uploaded video every minute, making automated moderation essential for addressing problematic material at scale.
Mental health advocates have praised these changes as a step in the right direction, though many emphasize that more comprehensive approaches involving parents, educators, and technology companies working together will be necessary for meaningful impact.
Industry experts note that the challenge lies in balancing content moderation with creator freedom. YouTube’s approach of limiting repeated exposure rather than outright removing borderline content represents a nuanced strategy that aims to preserve expression while protecting vulnerable users.
Internal data from YouTube suggests that these interventions have already reduced negative content consumption cycles in test groups, with especially large reductions among younger users.
The implementation of these new policies will be gradual, with YouTube planning to monitor their effectiveness and make adjustments based on user feedback and impact data. Content creators will also receive updated guidelines to better understand how these changes might affect their videos’ distribution.
Digital wellness researchers emphasize that while platform-level interventions are important, developing media literacy and critical thinking skills remains crucial for young users navigating online spaces. Educational initiatives focused on helping teens recognize potentially harmful content patterns have shown promising results in complementing technical solutions.
YouTube plans to share effectiveness data from these interventions with other social media companies as part of industry-wide efforts to create safer online environments, particularly for younger users who may be more susceptible to algorithmic content suggestions.