Alphabet Google (Credits: Wikimedia Commons)

The proliferation of deepfakes, hyper-realistic synthetic media, poses significant challenges for online platforms, and Google Search is where much of this content ultimately surfaces. The company has now announced a suite of new online safety features designed to curb explicit deepfakes. The updates aim to make it easier for users to remove non-consensual fake content and to keep such images from appearing prominently in Google Search results.

One of the most significant changes is an improved process for removing explicit deepfakes from search results. Users could already request the removal of non-consensual explicit images depicting them; Google has now also made algorithmic changes that lower the ranking of explicit fake content for queries likely to surface it. When a removal request succeeds, Google’s systems will additionally aim to filter out all explicit results on similar searches about the same person and to remove duplicates of the image from the index, so taking down one copy does not leave others discoverable.
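Google has not said how it detects copies of an image that has already been taken down, but a common technique for this kind of duplicate matching is perceptual hashing. The Python sketch below is only an illustration of that general idea, assuming the third-party Pillow and imagehash packages; the threshold and file names are hypothetical, not details of Google’s pipeline.

```python
# Illustration only: perceptual hashing to catch near-duplicate copies of an
# image that has already been removed on request. This is a common technique,
# not Google's published pipeline. Requires the Pillow and imagehash packages.
from PIL import Image
import imagehash

# Hamming-distance threshold under which two hashes are treated as the same
# underlying image (re-encodes, resizes, light crops). Value is hypothetical.
NEAR_DUPLICATE_THRESHOLD = 8

def hash_image(path: str) -> imagehash.ImageHash:
    """Compute a perceptual hash that stays stable across minor edits."""
    return imagehash.phash(Image.open(path))

def is_near_duplicate(candidate_path: str,
                      removed_hashes: list[imagehash.ImageHash]) -> bool:
    """True if the candidate matches any image already taken down."""
    candidate = hash_image(candidate_path)
    return any(candidate - removed <= NEAR_DUPLICATE_THRESHOLD
               for removed in removed_hashes)

# Hypothetical usage:
# removed = [hash_image("confirmed_takedown.jpg")]
# if is_near_duplicate("newly_crawled.jpg", removed):
#     ...  # suppress the newly crawled copy as well
```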

Emma Higham, a product manager at Google, spoke about the importance of these protections, stating, “These efforts are designed to give people added peace of mind, especially if they’re concerned about similar content about them popping up in the future.” Beyond combating explicit deepfakes, the company said in a blog post that it is expanding its “About this image” feature to more surfaces, including Circle to Search and Google Lens. The tool gives users contextual information about an image, such as how other websites use and describe it, any available metadata, and whether it carries a watermark identifying it as AI-generated. The feature is now available in 40 languages, making it easier for users to verify what they see online and to counter misinformation.
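Google has not published how “About this image” performs these checks, and its watermark detection for AI-generated images is not public. As a rough illustration of the kind of context such a tool can surface, the Python sketch below reads standard EXIF metadata with Pillow and applies a crude heuristic flag; the generator names and file path are assumptions for illustration only.

```python
# Illustration only: surfacing image metadata the way an "About this image"
# style tool might. Reads standard EXIF fields with Pillow; the generator
# names below are assumptions, and a missing hint proves nothing. Google's
# own watermark detection is not reproduced here.
from PIL import Image
from PIL.ExifTags import TAGS

KNOWN_GENERATOR_HINTS = ("DALL-E", "Midjourney", "Stable Diffusion")  # illustrative

def describe_image_metadata(path: str) -> dict:
    """Return human-readable EXIF metadata plus a crude AI-generation hint."""
    exif = Image.open(path).getexif()
    readable = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    software = str(readable.get("Software", ""))
    readable["possible_ai_generator"] = any(
        hint.lower() in software.lower() for hint in KNOWN_GENERATOR_HINTS
    )
    return readable

# Hypothetical usage:
# print(describe_image_metadata("example.jpg"))
```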

Google’s previous updates have already shown substantial progress in reducing exposure to explicit deepfake content. The company reports that these changes have decreased the appearance of explicit image results in relevant queries by over 70% this year. However, distinguishing between real explicit content and deepfakes remains a complex technical challenge, and Google is working on methods to differentiate between consensual explicit material, such as an actor’s nude scenes, and non-consensual deepfakes.

In addition to the enhanced removal process, Google is updating its search algorithms to better handle explicit deepfakes. Searches that intentionally seek deepfake images of real people will now surface “high-quality, non-explicit content” instead of explicit material. This change is part of a broader effort to ensure that search results prioritize legitimate, informative content over harmful or manipulated images. For example, if someone searches for explicit AI-generated images of a celebrity, the results will now include relevant news stories or other non-explicit content instead.
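Google has not described the classifier behind this behavior. The sketch below shows, in simplified form, how a search stack could reroute flagged queries to non-explicit results; the keyword list, data model, and function names are hypothetical.

```python
# Illustration only: rerouting queries that explicitly seek deepfake imagery
# of real people toward non-explicit results. The keyword list stands in for
# a trained intent classifier; none of this reflects Google's actual system.
from dataclasses import dataclass

DEEPFAKE_QUERY_TERMS = {"deepfake", "fake nude", "ai nude"}  # illustrative only

@dataclass
class SearchResult:
    url: str
    is_explicit: bool

def looks_like_deepfake_query(query: str) -> bool:
    """Crude keyword check standing in for a real intent classifier."""
    q = query.lower()
    return any(term in q for term in DEEPFAKE_QUERY_TERMS)

def rank(query: str, results: list[SearchResult]) -> list[SearchResult]:
    """For flagged queries, return only non-explicit results (news, context)."""
    if looks_like_deepfake_query(query):
        return [r for r in results if not r.is_explicit]
    return results
```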

Furthermore, Google is taking steps to demote websites that frequently host explicit fake imagery. A high volume of successful removal requests will be treated as a signal that a site is not a high-quality source, and such sites will be downgraded in search rankings, reducing the visibility of harmful content.
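In effect, this is a site-level demotion signal driven by removal requests. A minimal sketch of that idea follows, assuming a hypothetical threshold and penalty factor; Google has not disclosed the actual values or scoring.

```python
# Illustration only: a site-level demotion signal driven by confirmed removal
# requests. The threshold and penalty are hypothetical; the article says only
# that sites drawing many removal requests will rank lower.
from collections import Counter

REQUEST_THRESHOLD = 25   # hypothetical count of confirmed removals per domain
DEMOTION_FACTOR = 0.2    # hypothetical multiplier applied to the base score

removal_requests: Counter[str] = Counter()  # domain -> confirmed removals

def record_removal(domain: str) -> None:
    """Log a successful removal request against the hosting domain."""
    removal_requests[domain] += 1

def adjusted_score(domain: str, base_score: float) -> float:
    """Scale down the ranking score of domains that repeatedly host removed content."""
    if removal_requests[domain] >= REQUEST_THRESHOLD:
        return base_score * DEMOTION_FACTOR
    return base_score
```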