In its latest move to combat election misinformation, Google announced on Monday that advertisers will be required to disclose election ads that use digitally altered content. Effective July 2024, the policy mandates that advertisers flag any use of “synthetic or digitally altered content” in their election ads.
Under the updated policy, advertisers must select a checkbox labeled “altered or synthetic content” when setting up their ad campaigns. The checkbox signifies that the ad contains digitally manipulated content depicting real or realistic-looking people or events. This encompasses synthetic material that appears to show a person saying or doing something they did not, that alters footage of a real event, or that generates a realistic portrayal of an event that never occurred. Advertisers who fail to comply will receive a warning at least seven days before facing account suspension. The move is designed to prevent the spread of misinformation and ensure that users are well informed about the nature of the political ads they encounter.
Google will automatically generate in-ad disclosures for certain formats, including feeds and Shorts on mobile phones and in-stream ads on computers and televisions. These disclosures are intended to be prominently visible, ensuring users are aware of the synthetic nature of the content. For other ad formats, advertisers are required to provide a “prominent disclosure” that is clear and conspicuous, with language tailored to each ad’s context.
The policy specifies that disclosure is required for ads whose content inauthentically depicts real or realistic-looking people or events. For formats where Google does not auto-generate disclosures, advertisers must place the disclosure where users are likely to notice it. Acceptable language varies with the ad’s context, providing flexibility while maintaining transparency; examples include “altered or synthetic content,” “this audio was computer-generated,” and “this image does not depict real events.”
This development comes at a time when the rapid advancement of generative AI, capable of creating text, images, and video from prompts, has introduced new challenges for content platforms. Deepfakes, which manipulate footage to misrepresent individuals or events, have blurred the line between reality and fiction, and have been used to create fake videos that can sway public opinion and influence election outcomes. During India’s recent general election, for instance, AI-generated videos appearing to show Bollywood actors criticizing Prime Minister Narendra Modi went viral.
Google’s move is part of a broader trend among tech companies to increase transparency in political advertising. Meta, for example, implemented a similar disclosure policy last year, requiring advertisers to reveal the use of AI or digital tools in creating political, social, or election-related ads on Facebook and Instagram. OpenAI, meanwhile, said it had disrupted several covert influence operations that attempted to use its AI models for deceptive activity across the internet.