
A group of researchers from the University of Southern California published a paper titled “Combating Fake News: A Survey on Identification and Mitigation Techniques” that discusses existing methods and techniques for identifying and mitigating fake news.

The paper categorizes existing work on fake news detection and mitigation into three types:

  • fake news identification using content-based methods (classifies news based on the content of the information to be verified)
  • identification using feedback-based methods (classifies news based on the user responses it receives over social media)
  • intervention-based solutions (offers computational solutions for identifying the spread of false information, along with methods to mitigate the impact of fake news)

These existing methods are further categorized as follows:

Figure: Categorization of existing methods

“The scope of the methods discussed in content-based and feedback based identification is limited to classifying news from a snapshot of features extracted from social media. For practical applications, we need techniques that can dynamically interpret and update the choice of actions for combating fake news based on real-time content propagation dynamics”, reads the paper.

Techniques that provide such computational methods and algorithms are discussed extensively in the paper. Let’s have a look at some of these strategies.

Mitigation strategies: decontamination, competing cascades, and multi-stage intervention

The paper presents three different mitigation strategies aimed at reversing the effect of fake news by introducing true news on social media platforms. This ensures that users are exposed to the truth and that the impact of fake news on user opinions is mitigated.

The computational methods designed for this purpose first need to consider the information diffusion models widely used in social networks, such as the Independent Cascade (IC) and Linear Threshold (LT) models, as well as point process models such as the Hawkes process.
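To make these diffusion models concrete, here is a minimal sketch of the Independent Cascade model in Python. The toy graph GRAPH, its transmission probabilities, and the seed set are illustrative assumptions, not taken from the paper.

```python
import random

# Toy directed graph: node -> list of (neighbor, transmission probability).
# The graph and probabilities are made up for illustration.
GRAPH = {
    "a": [("b", 0.4), ("c", 0.3)],
    "b": [("d", 0.5)],
    "c": [("d", 0.2), ("e", 0.6)],
    "d": [("e", 0.3)],
    "e": [],
}

def independent_cascade(graph, seeds, rng=random):
    """Run one stochastic Independent Cascade diffusion from the seed set.

    Each newly activated node gets a single chance to activate each of its
    neighbors, succeeding with the edge's transmission probability.
    Returns the set of all nodes activated by the end of the cascade.
    """
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for node in frontier:
            for neighbor, prob in graph.get(node, []):
                if neighbor not in active and rng.random() < prob:
                    active.add(neighbor)
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return active

if __name__ == "__main__":
    print("Activated nodes:", independent_cascade(GRAPH, seeds={"a"}))
```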

Decontamination

The paper mentions the strategy introduced by Nam P. Nguyen in his paper “Containment of misinformation spread in online social networks”. The strategy involves decontaminating the users who have been exposed to fake news. It makes use of a diffusion process (which estimates the spread of information over the population) modelled with the Independent Cascade (IC) or Linear Threshold (LT) model.

A simple greedy algorithm is then designed to select the best set of users from which to start a diffusion process for true news, so that at least a given fraction of the users exposed to fake news can be decontaminated. The algorithm iteratively selects the next best user to include in the set, depending on the marginal gain obtained by that inclusion (i.e. the expected number of additional users activated or reached by the true news if the chosen user were added to the set).
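A rough sketch of that greedy selection is shown below, reusing the toy GRAPH and independent_cascade helper from the earlier snippet. The Monte Carlo spread estimate and the fixed budget are simplifying assumptions for illustration, not the exact formulation in Nguyen’s paper.

```python
def expected_spread(graph, seeds, num_samples=200):
    """Monte Carlo estimate of the expected number of users reached by true news."""
    if not seeds:
        return 0.0
    runs = (len(independent_cascade(graph, seeds)) for _ in range(num_samples))
    return sum(runs) / num_samples

def greedy_decontamination_seeds(graph, budget):
    """Iteratively add the user with the largest marginal gain in expected spread."""
    selected = set()
    for _ in range(budget):
        base = expected_spread(graph, selected)
        best_user, best_gain = None, 0.0
        for user in graph:
            if user in selected:
                continue
            gain = expected_spread(graph, selected | {user}) - base
            if gain > best_gain:
                best_user, best_gain = user, gain
        if best_user is None:          # no remaining user adds expected spread
            break
        selected.add(best_user)
    return selected

print(greedy_decontamination_seeds(GRAPH, budget=2))
```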

Competing cascades

The paper also mentions an intervention strategy based on competing cascades: while the fake news is propagating through the network, a true news cascade is introduced to compete with it.

The paper discusses an “influence blocking maximization objective” by Xinran He as an optimal strategy for spreading true news in the presence of a fake news cascade. The process strategically selects a set of “k” users with the objective of minimizing the number of users who end up activated by fake news at the end of the diffusion. According to the paper, this model assumes that once a user is activated by either the fake or the true cascade, that user remains activated under that cascade.
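The sketch below simulates two competing cascades on the toy GRAPH from the first snippet, with the stay-activated assumption mentioned above; the tie-breaking rule and the seed sets are illustrative assumptions.

```python
import random

def competing_cascade(graph, fake_seeds, true_seeds, rng=random):
    """One stochastic run of a fake cascade competing with a true cascade.

    Once a node is activated by either cascade it stays with that cascade;
    within a step, true-labelled nodes are (arbitrarily) processed first.
    Returns the set of nodes that end up activated by the fake cascade.
    """
    label = {u: "true" for u in true_seeds}
    label.update({u: "fake" for u in fake_seeds if u not in label})
    frontier = list(label)
    while frontier:
        next_frontier = []
        for node in sorted(frontier, key=lambda u: label[u] != "true"):
            for neighbor, prob in graph.get(node, []):
                if neighbor not in label and rng.random() < prob:
                    label[neighbor] = label[node]
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return {u for u, side in label.items() if side == "fake"}

print(competing_cascade(GRAPH, fake_seeds={"a"}, true_seeds={"c"}))
```

Plugging a Monte Carlo average of this fake-activation count into a greedy loop like the one in the decontamination sketch, and picking the k true news seeds that reduce it the most, gives a rough approximation of the influence blocking maximization objective.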

Multi-stage intervention

Another strategy discussed in the paper is the “multi-stage intervention strategy” proposed by Mehrdad Farajtabar in the paper “Fake News Mitigation via Point Process Based Intervention”. This strategy allows “external interventions to adapt as necessary to the observed propagation dynamics of fake news”, states the paper. The purpose of the external interventions is to incentivize certain users to share more true news, counteracting the fake news process over the network.

At each step of the intervention, certain budget and user activity constraints are imposed. These constraints make it possible to track the optimal amount of external incentivization needed to achieve the desired objective, i.e. minimizing the difference between fake and true news exposures. The strategy makes use of a reinforcement learning based policy iteration framework to derive this optimal amount of external incentivization.
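The paper’s reinforcement learning formulation is involved; the sketch below substitutes a simple greedy budget allocation at each stage as a stand-in for the learned policy, just to illustrate how stage-wise budget and per-user constraints shape the incentives. The exposure counts, decay factor, and budget values are invented for the example.

```python
import numpy as np

def allocate_incentives(fake_exposure, true_exposure, stage_budget, max_per_user):
    """Greedy single-stage allocation: push incentives toward users whose
    followers currently see more fake than true news, subject to a stage
    budget and a per-user cap. A stand-in for the policy that the paper's
    policy iteration framework would learn, not the method itself."""
    gap = np.maximum(fake_exposure - true_exposure, 0.0)
    if gap.sum() == 0:
        return np.zeros_like(gap)
    return np.minimum(stage_budget * gap / gap.sum(), max_per_user)

rng = np.random.default_rng(0)
fake = rng.poisson(5.0, size=6).astype(float)   # illustrative exposure counts
true = rng.poisson(3.0, size=6).astype(float)
for stage in range(3):
    incentives = allocate_incentives(fake, true, stage_budget=4.0, max_per_user=2.0)
    true += incentives        # incentivized users share more true news
    fake *= 0.8               # assume fake exposure decays between stages
    print(f"stage {stage}: incentives = {np.round(incentives, 2)}")
```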

Identification strategies: network monitoring and crowd-sourcing

The paper discusses different identification mechanisms that help actively detect and prevent the spread of misinformation due to fake news on social media platforms.

Network monitoring

The paper presents a strategy based on network monitoring, which involves intercepting information from a list of suspected fake news sources using computer-aided social media accounts or real paid user accounts. These accounts help filter the information they receive and block fake news.

The strategy uses a “network monitor placement” that is determined by finding the part of the network with the highest probability of fake news transmission. Another network monitor placement solution involves a Stackelberg game between leader (attacker) and follower (defender) nodes. The paper also mentions an idea implemented by various network monitoring sites: having multiple human or machine classifiers to improve detection robustness, since something missed by one fact-checker might be captured by another.
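As a toy illustration of the first idea, the sketch below ranks monitor locations by how often cascades from suspected sources reach them, reusing the independent_cascade helper and toy GRAPH from the earlier snippet. It is a Monte Carlo heuristic, not the exact placement method or the Stackelberg formulation surveyed in the paper.

```python
from collections import Counter

def monitor_placement(graph, suspected_sources, num_monitors, num_samples=200):
    """Place monitors on the nodes most often reached by cascades started
    from the suspected fake news sources (a simple Monte Carlo heuristic)."""
    hits = Counter()
    sources = set(suspected_sources)
    for _ in range(num_samples):
        hits.update(independent_cascade(graph, sources) - sources)
    return [node for node, _ in hits.most_common(num_monitors)]

print(monitor_placement(GRAPH, suspected_sources={"a"}, num_monitors=2))
```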

Crowd-sourcing

Another identification strategy mentioned in the paper makes use of crowd-sourced user feedback on social media platforms, which lets users report or flag fake news articles. These crowd-sourced signals are used to prioritize the fact-checking of news articles by capturing “the trade-off between a collection of evidence v/s the harm caused from more users being exposed to fake news (exposures) to determine when the news needs to be verified”, states the paper.

The fact-checking process and events are represented using point process models. This helps derive an optimal fact-checking intensity that is proportional to the rate of exposure to misinformation and the evidence collected as flags. The paper also mentions an online learning algorithm that leverages user flags more accurately by jointly inferring the flagging accuracies of users while identifying fake news.
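A minimal sketch of that idea, using a Beta-Bernoulli update of each user’s flagging accuracy and a weighted flag score for prioritizing fact-checks, is shown below. This is a simplified stand-in for the paper’s online learning algorithm; the class name, priors, and usage data are assumptions.

```python
from collections import defaultdict

class FlagAggregator:
    """Toy online aggregation of user flags with per-user accuracy estimates."""

    def __init__(self):
        self.correct = defaultdict(lambda: 1.0)   # Beta prior: 1 correct flag
        self.total = defaultdict(lambda: 2.0)     # Beta prior: 2 flags in total

    def accuracy(self, user):
        return self.correct[user] / self.total[user]

    def article_score(self, flaggers):
        # Higher score -> stronger crowd evidence that the article is fake.
        return sum(self.accuracy(u) for u in flaggers)

    def update(self, flaggers, article_was_fake):
        # Called after a fact-checker verifies a flagged article.
        for user in flaggers:
            self.total[user] += 1.0
            if article_was_fake:
                self.correct[user] += 1.0

agg = FlagAggregator()
agg.update(flaggers=["u1", "u2"], article_was_fake=True)   # verified fake
agg.update(flaggers=["u3"], article_was_fake=False)        # verified real
print(agg.article_score(["u1", "u3"]))                     # prioritize by score
```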

“The literature surveyed here has demonstrated significant advances in addressing the identification and mitigation of fake news. Nevertheless, there remain many challenges to overcome in practice,” state the researchers.

For more information, check out the official research paper.
