YouTube is allowing creators to let third-party companies use their videos to train AI models.
The new feature – which is turned off by default – lets creators who want their content used for artificial intelligence training opt in to exactly that. The opt-in approach aligns with global data privacy standards and gives creators control over how their content is used.
“We see this as an important first step in supporting creators and helping them realize new value for their YouTube content in the AI era,” a YouTube staff member named Rob wrote in a support post. The change arrives as the creator economy reaches an estimated $250 billion in value globally.
“As we gather feedback, we’ll continue to explore features that facilitate new forms of collaboration between creators and third-party companies, including options for authorized methods to access content.” Industry experts predict this could open up new revenue streams for content creators in the rapidly evolving AI landscape.
A video “must be allowed by the creator as well as the applicable rights holders” to be eligible for AI training, which includes owners of content that has been “detected by Content ID”. YouTube’s Content ID system has reportedly identified over 800 million videos for copyright protection since its launch.
“This update does not change our Terms of Service. Accessing creator content in unauthorized ways, such as unauthorized scraping, remains prohibited,” the YouTube staffer added. The clarification addresses concerns about AI models being trained on scraped content without permission, which has become a significant issue across the digital content space.
According to a support page, creators can pick and choose which third-party companies may train on their videos. This selective approach gives creators granular control over how their intellectual property enters the AI training ecosystem.
According to TechCrunch, the list includes AI21 Labs, Adobe, Amazon, Anthropic, Apple, ByteDance, Cohere, IBM, Meta, Microsoft, Nvidia, OpenAI, Perplexity, Pika Labs, Runway, Stability AI, and xAI. These companies represent a combined market capitalization of over $10 trillion and are at the forefront of AI development.
“These companies were chosen because they’re building generative AI models and are likely sensible choices for a potential partnership with creators,” a YouTube spokesperson added to The Verge. The selected companies all have established track records in large-scale AI development and data handling.
This initiative comes at a time when AI training data has become increasingly valuable, with the global AI training data market expected to reach $8.5 billion by 2025. Content creators who opt in could potentially benefit from this growing market while maintaining control over their intellectual property.
The move also addresses the ongoing debate about fair compensation for content used in AI training. Industry analysts suggest this could set a new standard for how creative content is valued and monetized in the AI era, potentially influencing similar initiatives across other platforms.
Privacy advocates have praised YouTube’s opt-in approach, noting that it gives creators explicit control over whether their content is used in AI development. This stands in contrast to previous industry practice, in which content was often scraped without explicit permission.
The feature includes detailed analytics and reporting capabilities, allowing creators to track how their content is being used in AI training. This transparency is expected to build trust between creators and AI companies while providing valuable insights into content utilization.
Educational content creators, in particular, could benefit significantly from this initiative, as their structured, informative content is particularly valuable for training AI models. Some experts predict this could lead to specialized content creation specifically optimized for AI training purposes.
Technical requirements for content eligibility are monitored for quality and compatibility with different AI training protocols, and YouTube has implemented verification processes to maintain content integrity throughout the training pipeline.
The platform plans to expand these features based on creator feedback and technological advancements, potentially including more partnering companies and additional monetization options in the future. This adaptive approach ensures the system can evolve with the rapidly changing AI landscape while continuing to serve creator interests.