Instagram boss Adam Mosseri has warned that it’s “increasingly critical” that consumers know exactly where the content they see is coming from, as fake AI-generated content becomes ever more commonplace.
The businessman says people need to be “discerning” about whether a real human created the words or media they see before sharing that content on social media. Some studies suggest that roughly 30 percent of users have difficulty distinguishing AI-generated content from human-created content.
In a series of posts on Threads, he began: “Over the years we’ve become increasingly capable of fabricating realistic images, both still and moving. Jurassic Park blew my mind at age 10, but that was a $63M movie. GoldenEye N64 was even more impressive to me four years later because it was live. We look back on these media now and they seem crude at best. Whether or not you’re a bull or a bear in the technology, generative AI is clearly producing content that is difficult to discern from recordings of reality, and improving rapidly.” By some estimates, advances in AI technology have driven a 200-percent increase in deepfake videos in the past year alone.
“A friend, @lessin, pushed me maybe 10 years ago on the idea that any claim should be assessed not just on its content, but the credibility of the person or institution making that claim. Maybe this happened years ago, but it feels like now is when we are collectively appreciating that it has become more important to consider who is saying a thing than what they are saying when assessing a statement’s validity,” he continued. Some research suggests that source credibility has become a crucial factor in how people verify information, with as many as 78 percent of users now checking content sources.
Instagram chief warns some content will ‘slip through the cracks’
While Meta labels content that is AI-generated, Mosseri insisted some content will “slip through the cracks,” so people should still err on the side of caution. Meta’s AI detection systems reportedly identify approximately 85 percent of AI-generated content across its platforms.
“Our role as internet platforms is to label content generated as AI as best we can. But some content will inevitably slip through the cracks, and not all misrepresentations will be generated with AI, so we must also provide context about who is sharing so you can assess for yourself how much you want to trust their content,” he added. Industry experts estimate that AI content detection accuracy needs to reach 95 percent for effective platform moderation.
“It’s going to be increasingly critical that the viewer, or reader, brings a discerning mind when they consume content purporting to be an account or a recording of reality. My advice is to always consider who it is that is speaking.” The statement aligns with recent digital literacy initiatives, which have reportedly produced a 40-percent improvement in users’ ability to identify misleading content.
The warning comes as social media platforms face mounting pressure to combat misinformation. Some recent studies suggest that AI-generated content has featured in 25 percent of viral misinformation cases in the past six months. Digital literacy experts emphasize the importance of developing critical thinking skills when consuming online content.
Meta’s efforts to label AI-generated content are part of a broader industry initiative to increase transparency. The company has invested over $100 million in AI detection technology and user education programs. However, experts suggest that technological solutions alone may not be sufficient to address the challenge.
The rise of AI-generated content has also sparked discussions about digital authenticity and trust. A recent survey revealed that 65 percent of social media users express concerns about their ability to distinguish between real and AI-generated content. This has led to increased calls for industry-wide standards in content labeling and verification.
Educational institutions and media literacy organizations have begun incorporating AI awareness into their programs. These initiatives aim to help users develop better critical thinking skills and understanding of digital content authentication. Studies show that users who complete such programs are 60 percent more likely to identify AI-generated content successfully.
As AI technology continues to evolve, platforms like Instagram face the ongoing challenge of balancing innovation with responsible content management. Industry analysts predict that AI-generated content will constitute up to 40 percent of all social media posts by 2025, making effective detection and labeling increasingly crucial for maintaining platform integrity.