YouTube wants creators to label content that has been altered or synthetically generated.
With the rise of generative AI, the video-sharing site wants it to be clear when someone has taken real images or videos of people or places and changed a person's appearance or mimicked their voice.
The platform cited these examples:

- "Using the likeness of a realistic person: digitally altering content to replace the face of one individual with another's, or synthetically generating a person's voice to narrate a video."
- "Altering footage of real events or places: such as making it appear as if a real building caught fire, or altering a real cityscape to make it appear different than in reality."
- "Generating realistic scenes: showing a realistic depiction of fictional major events, like a tornado moving toward a real town."
“Generative AI is transforming the ways creators express themselves — from storyboarding ideas to experimenting with tools that enhance the creative process. But viewers increasingly want more transparency about whether the content they’re seeing is altered or synthetic,” YouTube announced.
The disclosure requirement does not apply to content "that is clearly unrealistic, animated, includes special effects, or has used generative AI for production assistance."
The labels will start rolling out in the coming weeks, and YouTube says there will be "enforcement measures for creators who consistently choose not to disclose this information."