Last Saturday, Google presented a paper at the Munich Security Conference titled How Google Fights Disinformation. In the paper, they explain the steps they’re taking against disinformation and detail their strategy for Google Search, Google News, YouTube, and Google Ads. Here, we take a look at Google’s key strategies against disinformation.
Disinformation has become widespread in recent years. It directly affects Google’s mission of organizing the world’s information and making it accessible. Disinformation, misinformation, and fake news are attempts to mislead people into believing things that aren’t true by spreading such content over the internet. Disinformation refers to deliberate attempts to mislead, where the creator knows the information is false; misinformation is spread unintentionally by creators who simply have their facts wrong.
The motivations behind it can be financial, political, or purely for entertainment (trolls). These motivations can overlap in the content produced; moreover, disinformation can even be spread with good intentions, which makes the fight against fake news very complex.
A common solution for all platforms is not possible, as different platforms pose different challenges. At the same time, crafting standards that require deep deliberation for every individual case is not practical either.
Google outlines three main principles to combat disinformation:
#1 Make quality content count
Google products sort through a lot of information to display the most useful content first. They want to surface quality content and legitimate commercial messages rather than rumors. While the content differs across Google platforms, the principles are similar:
- Information is organized by ranking algorithms.
- These algorithms aim to ensure that the information benefits users, which is measured through user testing.
#2 Counter malicious actors
Algorithms cannot determine whether a piece of content about current events is true or false, nor can they determine the true intent of the content creator. For this reason, Google products have policies that prohibit certain behaviors, such as misrepresenting ownership of content. Some users try to gain a better ranking through spam; the same behavior is common among people who spread disinformation. Google has algorithms in place to demote such content, supported by human review for further filtering.
#3 Give users more choices
Giving users different perspectives is important before they choose a link and proceed to read content or view a video. Hence, Google provides multiple links for a searched topic. Google Search and other products now have additional UI elements that segregate information into different sections for an organized view of content. They also have a feedback button on their services through which users can submit their thoughts.
Partnership with external experts
Google cannot do this alone, so they have partnered with news organizations to create quality content that can uproot disinformation. They mention in the paper: “In March 2018, we launched the Google News Initiative (GNI) to help journalism thrive in the digital age.
With a $300 million commitment over 3 years, the initiative aims to elevate and strengthen quality journalism.”
Preparing for the future
People who create fake news will always try new methods to propagate it. Google is investing in research and development to counter it, especially ahead of elections. They intend to stay ahead of malicious actors who may use new technologies or tactics, which can include deepfakes. They also want to ensure that information such as polling locations remains easily available, guard against phishing, and mitigate DDoS attacks on political websites.
YouTube and conspiracy theories
Recently, there have been a lot of conspiracy theories floating around on YouTube. In the paper, they say that: “YouTube has been developing products that directly address a core vulnerability involving the spread of disinformation in the immediate aftermath of a breaking news event.” Making a legitimate video with correct facts takes time, while disinformation can be created quickly to spread panic or negativity.
In conclusion, they note that “fighting disinformation is not a straightforward endeavor. Disinformation and misinformation can take many shapes, manifest differently in different products, and raise significant challenges when it comes to balancing risks of harm to good faith, free expression, with the imperative to serve users with information they can trust.”
Some users believe that only the platforms themselves can take action against disinformation propaganda:
I think it’s self evident that the platforms are the only actors remotely capable of dealing with these issues (and many more). The question is how they can be held accountable while they attempt to do so
— hal is in san francisco (@halhod) February 18, 2019
Other users question Google’s efforts, citing a Bitcoin example in which the legitimate website was ranked below one containing disinformation:
— David DeSantis (@PilotDaveCrypto) February 18, 2019
Some speculate that corporations should first address their own bias in how they rank pages:
All these papers should start with “how did we adress our own bias”, otherwise its just corporate propoganda not worth the paper
— Paul Jayzilla (@PaulJayzilla) February 19, 2019
What is the difference between disinformation and information google doesn’t like? Scary.
— Darin T (@Darin_T80) February 17, 2019
To read the complete paper, with Google product-specific details on fighting disinformation, head over to the Google Blog.