Amid the ongoing allegations against Facebook over fake news, the company is now reportedly working on a scale to rate user trustworthiness. According to a report by The Washington Post, Facebook assigns its users a trustworthiness score ranging from 0 to 1, based on how reliably they flag false news.
This is another of Facebook’s attempts to revamp its image after it got unfriended by Wall Street and was accused by HUD of discriminatory advertising. Facebook has previously filed several patents aimed at battling fake news and improving the News Feed, most recently one for its news feed filter tool.
How does the fake news scoring system work?
If a user flags something as false news but fact-checkers verify it as true, their score suffers and Facebook gives less weight to their future flags. If a user consistently reports news that fact-checkers confirm to be false, their score improves and Facebook trusts their future flags more. User-reported posts are then ranked by reporter trustworthiness, so the fact-checking team can see which posts to look at first and make the best use of its time.
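The flow described above — upheld flags raising a user's reliability score, overturned flags lowering it, and the score ordering the fact-checkers' queue — might be sketched roughly as follows. Facebook has not disclosed its actual formula, so the function names and the smoothed-ratio scoring here are purely illustrative assumptions:

```python
# Hypothetical sketch of a reporter-trust score in [0, 1]; Facebook's real
# formula is undisclosed and uses additional signals.

def trust_score(flags_upheld: int, flags_overturned: int) -> float:
    """Smoothed fraction of a user's past flags that fact-checkers upheld.
    The +1/+2 (Laplace) smoothing keeps brand-new users near 0.5 rather
    than at either extreme."""
    return (flags_upheld + 1) / (flags_upheld + flags_overturned + 2)

def prioritize(reports):
    """Order flagged posts for the fact-checking queue: posts reported by
    users with a better track record come first. `reports` is a list of
    (post_id, flags_upheld, flags_overturned) tuples."""
    return sorted(reports, key=lambda r: trust_score(r[1], r[2]), reverse=True)

queue = prioritize([("post_a", 1, 9), ("post_b", 9, 1)])
# post_b is reported by the more reliable flagger, so it is checked first.
```

The smoothing is one simple way to realize the behavior the article describes: a history of accurate flags pushes the score toward 1, a history of overturned flags pushes it toward 0.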
The idea behind this scoring is to discount users who habitually make false claims about news articles. It also helps thwart groups of users who band together to flag content from a news publisher they simply disagree with. Facebook says, “We developed a process to protect against people indiscriminately flagging news as fake and attempting to game the system. The reason we do this is to make sure that our fight against misinformation is as effective as possible.”
Facebook’s News Feed product manager Tessa Lyons confirmed that the scoring system exists and that it was developed over the past year. Lyons said, “There’s currently no way to see your own or someone else’s trustworthiness score. And other signals are also used to compute the score.” Facebook is staying tight-lipped about how the score is generated, to prevent bad actors from gaming their way to a higher trustworthiness score.
While it is useful to separate genuine flagging from the rest so that moderators can focus on fact-checking, what is still missing is an effective mechanism to limit the reach of fake news in the first hours after a post goes live. This makes us wonder whether Facebook or other social media sites might consider rating users on their propensity for spreading fake news through shares and likes.
The entire interview is available on The Washington Post’s website.