Following the recent flood of misleading stories, especially during the U.S. presidential election, Google has decided to bolster its efforts to block offensive search results. The search giant wants to improve its AI's understanding of such content and has updated its guidelines to bring its contractors into the loop.

As noted in the updated guidelines, which have been examined in detail by Search Engine Land, Google is expanding the responsibilities of its more than 10,000 contractors to assess search result quality more closely. These contractors are now required to help the company's search algorithm better recognize upsetting or factually incorrect content. Their regular task is to conduct real searches and rate the pages that appear in the top results, and their contributions have long helped Google maintain its lead over competing search engines.

They are now also being handed the task of keeping search results safe and free of misleading content. The contractors have been given a new “Upsetting-Offensive” flag, which Google says should be applied to content that promotes hate or violence based on race, gender, caste, nationality, and other attributes. The flag also covers web pages containing racial slurs, graphic violence, information on harmful activities, or other content that local communities might find offensive.

Speaking about this initiative, Paul Haahr, a senior engineer on Google's search team, said:

We’re explicitly avoiding the term fake news because we think it is too vague. Demonstrably inaccurate information, however, we want to target. We will see how some of this works out. I’ll be honest. We’re learning as we go.

To make it easier for contractors to understand what qualifies as ‘upsetting-offensive’ content, Google has also shared an example in the updated guidelines document. For a search on “holocaust history,” the contractors are shown two different sets of results. In the screenshot attached underneath, the first result is Stormfront, a white supremacist website that denies the Holocaust — an anti-Semitic position whose promotion is a crime in over 20 countries. Google instructs raters to mark this page ‘upsetting-offensive’ because of that content, which many people would find deeply hurtful.

Image credit: Search Engine Land

The second search result, on the other hand, comes from The History Channel and doesn't require the ‘upsetting-offensive’ flag. You might wonder why it isn't flagged, given that it also covers a sensitive and unsettling topic. Google's answer is that while the page contains Holocaust-related subject matter, it is a “factually accurate source of historical information.” It does not deny the atrocities committed during World War II under Hitler's regime; in short, the website doesn't promote hate in any way.

Further, Search Engine Land notes that flagging a web page as ‘upsetting-offensive’ doesn't immediately lower its position in the search results. Instead, the contractors' input serves as training signals for the company's engineering team to refine the AI-powered search algorithm. By steadily learning from these examples, the algorithm should eventually come to recognize offensive and misleading web pages on its own.

This doesn't mean Google is removing the white supremacist website from its search results entirely. Under the new guidelines, it will not surface in the top results but can still be found by searching for its name directly. The new flagging practice allows the search engine to improve how it ranks results, so as to avoid the kind of missteps seen during the elections. Google's contractors have already begun the content flagging exercise, and the company is reportedly pleased with the progress to date.
