Generative AI and AI-powered tools have blurred the line between authentic photography and altered content, fueling fears about deepfakes and the dangers they pose. Google is addressing this issue by bringing new labels to Google Photos. These labels will roll out starting next week, letting users see detailed metadata within the app that reveals whether AI tools and features were used to modify an image.

“We often make edits to our photos to make them pop. Sometimes, that means making a simple change to a photo, like cropping it. Other times, it might involve more complex changes like removing unwanted distractions or objects, perfecting the lighting or even creating a new composition. These used to be time-consuming complex tasks, but AI has changed that — powering editing tools like Magic Editor and Magic Eraser in Google Photos,” John Fisher, Engineering Director at Google Photos and Google One, said in a blog post on the matter, adding, “To further improve transparency, we’re making it easier to see when AI edits have been used in Google Photos. Starting next week, Google Photos will note when a photo has been edited with Google AI right in the Photos app.”

Users will find the label in the new “AI Info” section of the Google Photos app, which is surfaced from the image’s metadata and will indicate whether the likes of Magic Eraser or Magic Editor were used. Magic Eraser removes unwanted elements from photos, while Magic Editor enables more sweeping, generative edits such as repositioning subjects or reworking backgrounds.
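For the technically curious, a label like this typically lives in the image file’s standard metadata. The sketch below is a hypothetical illustration only: it assumes the edit information is written to an IPTC Extension field such as DigitalSourceType and shells out to the widely used ExifTool to read it. The tag choice and file name are assumptions for illustration, not confirmed details of Google’s implementation.

```python
import subprocess

# Hypothetical sketch: inspect an image's metadata for an AI-edit marker.
# Assumes ExifTool is installed and that the label lands in an IPTC
# Extension field like DigitalSourceType; neither detail is confirmed
# by Google's announcement.
def check_ai_edit_metadata(path: str) -> str:
    result = subprocess.run(
        # -s -s -s prints the bare tag value with no label
        ["exiftool", "-s", "-s", "-s", "-XMP-iptcExt:DigitalSourceType", path],
        capture_output=True,
        text=True,
        check=True,
    )
    value = result.stdout.strip()
    return value if value else "No AI-edit metadata found"

print(check_ai_edit_metadata("edited_photo.jpg"))
```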

What’s interesting about this new feature is that it is not limited to tools powered by generative AI. For example, if users turn to features like Best Take, which combines frames from several similar shots into a single image, the metadata will flag that as well. Thankfully, these labels provide clarity without disrupting the visual content itself.

It would arguably have been better for Google to provide a visible watermark to warn users about AI-modified images, but something is better than nothing, especially at a time when AI-powered photo editing is rising in popularity and it is becoming increasingly difficult to tell whether an image is original or digitally altered. With this, users will not be so easily fooled, and the labels could act as a deterrent against deepfakes, the use of AI to create hyper-realistic, fabricated images.

Then again, a watermark would have interfered with the viewing experience. The metadata approach is not a perfect solution either, since metadata can be stripped from an image when it is downloaded, shared, or re-uploaded across platforms. Not to mention that metadata is something users must actively look for: it is usually hidden from view, and many may overlook it entirely.
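To see how fragile embedded metadata is, here is a minimal sketch (file names are placeholders) showing that merely re-saving an image with a common library drops embedded EXIF data unless it is explicitly carried over, which is roughly what happens when platforms re-encode uploads:

```python
from PIL import Image

# Minimal demonstration of metadata loss on re-save. Pillow discards
# embedded EXIF bytes on save unless they are explicitly passed back,
# similar to what many platforms do when re-encoding uploaded images.
original = Image.open("labeled_photo.jpg")
print("EXIF bytes before re-save:", len(original.info.get("exif", b"")))

original.save("reshared_copy.jpg", quality=90)  # no exif= argument given

copy = Image.open("reshared_copy.jpg")
print("EXIF bytes after re-save:", len(copy.info.get("exif", b"")))  # 0
```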