Google and Fake News: Changes to Search


“Today, in a world where tens of thousands of pages are coming online every minute of every day, there are new ways that people try to game the system. The most high profile of these issues is the phenomenon of “fake news”. . . While this problem is different from issues in the past, our goal remains the same—to provide people with access to relevant information from the most reliable sources available. And while we may not always get it right, we’re making good progress in tackling the problem.”
– Ben Gomes, VP of search engineering at Google
In 2016, the term “fake news” became ubiquitous. Outlandish stories made headlines and spread across search results and social platforms alike. The issue became so problematic and widespread that PolitiFact named fake news its 2016 Lie of the Year.
While many view fake news as a problem stemming from Facebook and other social media websites, Google is just as responsible for spreading misinformation.
The Outline recently ran a piece showing that many of Google’s snippets are nonsensical, misleading, incorrect, and sometimes outright fake.
This isn’t the first time that Google’s search algorithms have landed the company in hot water.
In December 2016, searching the phrase “did the Holocaust happen” returned an article from a neo-Nazi website entitled “Top 10 reasons why the Holocaust didn’t happen.”
These kinds of incidents have finally prompted Google to announce changes to its search algorithm designed to keep fake news from spreading, along with new reporting tools for users.

How Google is Fighting Fake News

The recent alterations Google has made to search are geared toward improving the quality of the material that populates the SERPs by surfacing more authoritative sites and demoting low-quality content, as well as combating fake news, a term that Google’s Ben Gomes defines as “. . . content on the web [that] has contributed to the spread of blatantly misleading, low quality, offensive or downright false information.”
This change should help prevent incidents like the aforementioned Holocaust-denial article from reaching the top of the SERPs and counter the proliferation of such unfounded claims.
Additionally, Google has implemented a new reporting feature for its autocomplete function, allowing users to inform the company of misleading or violent content appearing in the search bar.
Similarly, the search engine is also placing a “Feedback” button at the bottom right of its snippets so that searchers can report incorrect, divisive, or otherwise inaccurate information.
Google has also recognized that some harmless queries entered into its search bar return “offensive or clearly misleading content” that is not what users were looking for. Google claims that this “small set of queries” represents only about 0.25 percent of searches, but that still equates to roughly five billion searches a year given that Google processes an estimated two trillion searches annually.
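As a quick sanity check on that figure, the arithmetic holds up: 0.25 percent of roughly two trillion searches works out to about five billion. A minimal back-of-the-envelope sketch, assuming the two-trillion-per-year estimate cited above:

```python
# Back-of-the-envelope check of Google's "0.25 percent" figure,
# assuming roughly 2 trillion searches per year (the estimate cited above).
annual_searches = 2_000_000_000_000   # ~2 trillion searches per year
problematic_rate = 0.0025             # 0.25 percent of queries

affected = annual_searches * problematic_rate
print(f"{affected:,.0f} searches per year")  # 5,000,000,000 -> ~5 billion
```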
Alongside the new feedback tools, Google has also updated the guidelines its Search Quality Raters – who are real people, mind you – use to identify offensive, false, or otherwise harmful material with greater accuracy and specificity.
The new guidelines put forth by Google focus on helping Search Quality Raters flag the types of material that include “. . . misleading information, unexpected offensive results, hoaxes, and unsupported conspiracy theories.”
While the feedback from these individuals doesn’t necessarily impact search rankings, it does help Google establish where its algorithm is falling short so that the company can course correct.
As far as its algorithms are concerned, Google is adjusting some of the signals that influence search rankings, giving more weight to those that demote low-quality content.
Finally, the search provider is also trying to shed some light on why some rather unsavory content makes its way into autocomplete. It has made its autocomplete policy (which Google also updated in light of recent events) openly available to the public, so that everyone can understand the process Google follows when autocomplete suggestions need to be amended.

Questions of Concern

While surfacing accurate, honest information is objectively a good thing, there are concerns about how Google chooses to manage this information and whether the company should be in charge of such a task at all.
Some might equate this move to a form of censorship of the search results, something Google has been accused of in the past. During the 2016 election cycle, the search engine was accused of manipulating autocomplete to remove negative suggestions about Hillary Clinton.
Many YouTube influencers feel that they are also being censored by YouTube’s “restricted mode,” which has hidden a sizeable amount of LGBT content from viewers. Additionally, many of the site’s top creators feel another form of censorship on YouTube as videos are demonetized for containing strong language or sexually suggestive content, or for discussing “controversial or sensitive subjects.”
And with the question of Net neutrality looming in the mind of every Internet-loving American, it can quickly feel as if the Internet’s free flow of information is being strangled in favor of a controlled flow of data.
And then there’s the issue of determining what “fake news” actually is. Claims about government programs like MK Ultra once seemed like the ravings of a madman shouting his manifesto in the park, but ultimately turned out to be completely true.
Under Google’s new approach, this type of information could be suppressed and demoted for appearing fake, when in reality it is factual.
All of these factors call into question Google’s credibility and its capability to handle such an immensely important task.
Despite the concerns, however, these changes are already in effect. In all likelihood, they will help keep much of the Internet’s false information off the SERPs and contribute to a more informed society. Society, however, must keep a watchful eye on issues like fake news, Net neutrality, and other means by which the free flow of data could be restricted.
Do you think that YouTube creators are being censored? What is your opinion on Net neutrality?
