This article was published 1 year ago


With generative AI advancing by leaps and bounds, social media companies are making an accelerated push to ensure such content is reliably detected and clearly identifiable to users. YouTube is joining those efforts with its latest set of guidelines for generative-AI content. The Google-owned video-sharing platform is gearing up to introduce policies targeting manipulated or synthetic content, particularly content created with the assistance of AI tools. The policy update is slated for implementation in the coming year.

One of the pivotal aspects of YouTube’s new policy is mandatory disclosure for video creators. Specifically, creators will be required to explicitly reveal when they upload content that has been manipulated or generated using AI tools, particularly when it convincingly appears realistic. This disclosure mandate is especially crucial for content that delves into sensitive topics like elections, ongoing conflicts, and public health crises, or that involves public officials. Google and YouTube’s move to tighten policies around AI-generated content is a direct response to the escalating challenge of misinformation. Given the potential of AI (especially generative AI) to fabricate sophisticated yet false narratives, the need for transparency and responsible disclosure becomes increasingly apparent. This aligns with broader industry efforts to curb the spread of misleading information, especially around critical events such as elections and global crises like the pandemic.

Under the revamped policy, creators engaging in digital manipulation or AI-generated content must select an option to display YouTube’s newly introduced warning label in the video’s description panel. Notably, for content addressing sensitive topics, YouTube is taking an extra step by ensuring a more prominent display of the disclosure label directly on the video player. This strategic move aims to enhance transparency and viewer awareness. Moreover, the platform is set to collaborate closely with creators, providing guidance and support to ensure a comprehensive understanding of the new requirements well before the policy’s full implementation.

“There are also some areas where a label alone may not be enough to mitigate the risk of harm, and some synthetic media, regardless of whether it’s labeled, will be removed from our platform if it violates our Community Guidelines. For example, a synthetically created video that shows realistic violence may still be removed if its goal is to shock or disgust viewers. And moving forward, as these new updates roll out, content created by YouTube’s generative AI products and features will be clearly labeled as altered or synthetic,” Jennifer Flannery O’Connor and Emily Moxley, YouTube VPs of product management, wrote in a blog post.

Creators who fail to consistently disclose their use of AI-generated content may face penalties ranging from content removal to suspension from the coveted YouTube Partner Program. Additionally, YouTube aims to empower its user base by allowing individuals to request the removal of AI-generated or synthetic content that simulates identifiable persons. This user-driven moderation approach extends to the music industry, enabling music partners to request the removal of AI-generated music that imitates a specific artist’s voice. Importantly, YouTube emphasizes that not all flagged content will be automatically removed, leaving leeway for considerations such as parody or satire.

“We’ve heard continuous feedback from our community, including creators, viewers, and artists, about the ways in which emerging technologies could impact them. This is especially true in cases where someone’s face or voice could be digitally generated without their permission or to misrepresent their points of view,” the blog post read.

Addressing the distinct challenges posed by AI-generated music, YouTube is introducing specialized controls for the music industry. Music labels and distributors will be able to flag content that mimics an artist’s singing or rapping voice and request its removal. Simultaneously, the platform is actively developing a compensation system to remunerate artists and rightsholders for AI-generated music.