Google has unveiled a comprehensive plan to safeguard its platforms and services as part of a broader effort to curb the spread of misinformation during upcoming elections. The decision to impose limitations on its generative AI tools comes in response to growing concerns over the misuse of AI to create misleading election-related content.
“In 2024, we will continue our efforts to safeguard our platforms, help people make informed decisions, surface high-quality information to voters, and equip campaigns with best-in-class security. We’ll do this work with an increased focus on the role artificial intelligence (AI) might play. Like any emerging technology, AI presents new opportunities as well as challenges. In particular, our AI models will enhance our abuse-fighting efforts, including our ability to enforce our policies at scale,” the blog post read.
The company, in an official blog post, noted that it will restrict the types of election-related queries that Bard (its AI-powered chatbot) and Search Generative Experience (SGE) can return responses for. These restrictions are set to be enforced by early next year. “We’re also focused on taking a principled and responsible approach to introducing generative AI products – including Search Generative Experience (SGE) and Bard – where we’ve prioritized testing for safety risks ranging from cybersecurity vulnerabilities to misinformation and fairness,” Susan Jasper, VP, Trust & Safety Solutions, wrote in the blog post.
While the company refrains from specifying which types of queries will be restricted, Google emphasizes a cautious approach aimed at minimizing the potential misuse of its generative AI tools. Beyond this, Google will surface authoritative information from state and local election offices at the top of its Search results when users search for common topics such as how and where to vote. Polling locations will also be highlighted on Google Maps, along with easy-to-follow directions.
In addition to restricting AI tools, Google plans to enforce specific advertising disclosures and introduce content labels for certain types of content. The intention is to establish a more transparent digital environment, giving users the information they need to distinguish authentic content from potentially misleading material. As part of the broader initiative, Google will also require YouTube creators to disclose the use of “altered or synthetic content” in their videos. While the enforcement mechanisms are not explicitly outlined, this step aligns with Google’s existing ad transparency policies, which require disclosures for political advertisements featuring altered or synthetic content. Furthermore, SGE’s “About this Result” feature will provide context, while “About this image” in Search will help users assess the credibility and context of images found online.