The phenomenon that began as gaming the system with low-quality content has become a nightmare for Internet giants. It has taken the shape of false and misleading content and now has its own term: fake news. Google and Facebook are two of the most prominent names called out for propagating false news stories during the U.S. presidential election.
Taking responsibility, Google has today announced some structural changes to its renowned search engine. The changes, effective immediately, are intended to help the search giant weed out content that has contributed to the spread of blatantly misleading, low-quality, offensive, or downright false information.
The Mountain View-based search giant has always rolled out updated algorithms to grapple with individuals trying to game the search results and rank higher. The problem has changed shape a little, but Google remains committed to its long-term effort of improving the information displayed in users' search results. Describing the changes in an official blog post, Ben Gomes, Google's VP of Engineering for Search, said,
While we may not always get it right, we’re making good progress in tackling the problem. But in order to have long-term and impactful changes, more structural changes in Search are needed.
This is being made possible by definitive changes to the underlying search ranking algorithm, but that's not everything. Google is also providing users with a set of handy tools to give feedback about the information and results displayed. It is now looking to maximize community participation to enhance the overall experience and to weed out those trying to spread malice through wrongful means, whether through content or ads.
As you might already be aware, Google has indexed hundreds of billions of pages currently live on the web. It has discovered that a portion of its daily search traffic (about 0.25 percent) surfaces offensive or clearly misleading content. The search giant acknowledges that, at its scale, this is a significant number, and that it isn't the content users came to the platform looking for. Thus, the first and foremost change involves improvements to the search ranking algorithms.
Google is adopting a process similar to what Facebook tried earlier to curate the news sources surfaced in its 'Trending' column. The search giant has updated its Search Quality Rater guidelines and employs human raters to assess the quality of content being surfaced to users searching on the platform. It doesn't want to be caught in another hubbub surrounding the content displayed in its results.
These individuals provide Google with feedback on the various experiments it conducts to improve search quality. They work with a subset of problematic queries to determine the areas where search results need improvement; their ratings do not directly affect page rankings. Google has also recently updated the guidelines to help raters flag more low-quality content, including misleading information, unexpected offensive results, hoaxes, and unsupported conspiracy theories. Such low-quality content will be demoted.
Further, for those unaware, Google (as well as YouTube) was recently caught in a hullabaloo over the search results surfaced for queries about the Holocaust. The top result, spotted back in December last year, came from a neo-Nazi website called Stormfront and misleadingly questioned the events that define history.
Thus, the second change is being made to the search ranking signals, the inputs that determine how and which results are shown for particular queries. These signals have been adjusted (Google hasn't explained exactly how) to help surface more authoritative pages and demote low-quality content. This should not only help maintain the freshness of the content but also improve the relevance of the results shown for your queries.
These under-the-hood changes to the search engine are complemented by some feature introductions on the surface as well. Google has decided to bring you into the loop and let you offer feedback about the content being surfaced algorithmically on the website. It has added feedback forms to the Autocomplete and Featured Snippets sections to gather reports of false and inappropriate content appearing in these results. This will enable Google to fix its algorithms, and the blog post further adds,
Starting today, we’re making it much easier for people to directly flag content that appears in both Autocomplete predictions and Featured Snippets. These new feedback mechanisms include clearly labeled categories so you can inform us directly if you find sensitive or unhelpful content.
Additionally, Google is planning to provide users with greater transparency across all of its products. This should help answer questions such as why Google Assistant (even on Google Home) was relaying shocking or offensive predictions. Its content policies are now listed in the Help Center, along with information on 'How Search Works'. Google has also launched other initiatives, such as the news verification platform CrossCheck, and has expanded its 'fact check' tools to Search and News internationally in all available languages.