
The DCMS committee's fake news inquiry hearing on Facebook’s platform was held yesterday at the House of Commons, UK. It was the first time parliamentarians from as many as nine countries had gathered in one place: representatives came from Argentina, Canada, France, Singapore, Ireland, Belgium, Brazil, and Latvia, alongside members of the UK’s DCMS committee. Facebook was represented by Richard Allan, its VP for policy solutions, who is also a member of the House of Lords as Lord Allan of Hallam. The session was chaired by Damian Collins MP, head of the UK parliament’s Digital, Culture, Media and Sport (DCMS) select committee.

About Zuckerberg not attending the meeting himself

Facebook had refused to send Zuckerberg to the hearing despite repeated requests from the DCMS committee, even after the committee offered the flexibility of attending remotely via FaceTime. The parliamentarians were clearly displeased with Mark Zuckerberg’s empty chair at the meeting. They remarked that as CEO he should be accountable at a meeting that involved his company and parliamentarians representing millions of Facebook users from different countries.

There were plenty of remarks directed at the Facebook founder’s absence from the hearing. Mark Zuckerberg had told the US Senate hearing earlier this year: “We didn’t take a broad enough view of our responsibility, it was my mistake and I am sorry”. When Allan was asked if he thought that was a genuine statement, he said yes. Nathaniel Erskine-Smith from Canada then remarked: “Just not sorry enough to appear himself before nine parliaments.”

Canada wasn’t done; another remark from Erskine-Smith: “Sense of corporate social responsibility, particularly in light of the immense power and profit of Facebook, has been as empty as the chair beside you.” In Canada, only 270 people had used the Cambridge Analytica–linked app Your Digital Life, yet 620,000 had their information shared with the developer.


Who gave Mr. Zuckerberg the advice to ignore this committee?

Charles Angus, vice-chair, from the House of Commons, Canada, remarked that Zuckerberg had decided to “blow off this meeting”. Richard Allan accepted full responsibility for Facebook’s decisions on public appearances.

How does it look that Zuckerberg is not here and you’re apologizing for his absence?

“Not great” was his answer.

Don’t you see that Facebook has lost public trust due to misinformation tactics?

Allan agreed with this point. Charles Angus said Facebook has lost the trust of the international committee that it can police itself. Damian Collins said: “It should be up to the parliaments to decide what regulatory measures need to be set in place and not Facebook.”

Were you sent because you could answer our questions or to defend Facebook’s position?

Allan said that he was sent to answer questions; he had been with the company since 2009 and had experienced events first hand. He said that he volunteered to come. Mike Schroepfer, Facebook’s CTO, was sent to an earlier hearing, but the committee was not happy with his answers.

The Cambridge Analytica incident

Questions were asked about when Facebook became aware of this incident. Allan said it was when the story broke in the press.

When did Mark Zuckerberg know about the GSR Cambridge Analytica incident?

After some evasion, the answer was March 2018, when the story was covered by the press. The same question had been asked six months earlier of Mike Schroepfer, who said he didn’t know. A follow-up asked whether Facebook was aware of, and had banned, any other apps that breached privacy. Allan said there were many, but on probing could not name even one; he promised to send the committee a written response to that question. After the US Senate hearing in April, Zuckerberg was supposed to provide a list of such banned apps; the committee still hasn’t received any such list.

Ian Lucas MP (Wrexham, Labour) said: “You knew app developers were sharing information and the only time you took actions was when you were found out.”

What were Facebook’s decisions on data and privacy controls that led to the Cambridge Analytica scandal?

Allan explained that there are two versions of the way developers had access to user data:

  1. Before the 2015 policy changes, access to friends’ data was allowed
  2. After the changes, this access was removed
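To make the change concrete, here is a small illustrative sketch of how it looked from a developer's side. This is our own hypothetical helper, not an official Facebook SDK; the version numbers and the removal of the `friends_*` read permissions in Graph API v2.0 come from Facebook's public API changelogs.

```python
# Illustrative sketch (hypothetical helper, not Facebook SDK code) of the
# Graph API change: v1.0 exposed friends' data to apps, v2.0+ did not.

GRAPH = "https://graph.facebook.com"

# Permissions a v1.0-era app could request to read a *friend's* profile data.
V1_FRIEND_PERMISSIONS = {"friends_likes", "friends_birthday", "friends_location"}


def friends_request_url(api_version: str, token: str) -> str:
    """Build the /me/friends request URL for a given API version."""
    return f"{GRAPH}/{api_version}/me/friends?access_token={token}"


def friend_data_visible(api_version: str) -> str:
    """Summarize what the friends edge returned, per API version."""
    if api_version.startswith("v1."):
        # v1.0: every friend was listed, and friends_* permissions
        # exposed their profile fields to the app.
        return "all friends; friends_* permissions expose their data"
    # v2.0 and later (enforced for all apps by 2015): only friends who
    # also installed the app are returned, and the friends_* read
    # permissions no longer exist.
    return "only friends who also use the app; no friends_* permissions"
```

The "Your Digital Life" app operated under the first regime: a few hundred thousand direct installs could yield data on tens of millions of their friends.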

Non-user data sits on Facebook’s servers, but the company says it does not use it to create shadow profiles. Additionally, any third-party app is expected to have its own privacy policy, which can differ from Facebook’s own. Allan said that if an app is found whose privacy measures may lead to privacy issues, Facebook takes action, but he could not provide an example of having done so.

Will Facebook apply GDPR standards across all countries as Zuckerberg stated?

Allan said they believe the tools and systems they have built are GDPR compliant.

Russian activity on Facebook

Questions were asked based on the recently seized documents, but Allan deflected them by saying they contain unverified and partial information.

Why didn’t Facebook disclose that it knew Russian ads were run on its platform?

The case made for the question was that no one from Facebook disclosed the information about Russian activity on its platform; it wasn’t disclosed until the US Senate Intelligence Committee made a formal request. Allan said their current policy is to investigate and publish any such activity.

From the cache of documents obtained, a point was made about a 2014 email from a Facebook engineer concerning Russian IP addresses using a Pinterest API key to pull over 3 billion data points through the ordered friends API. Allan said that the details from those seized emails/documents are unverified, partial, and can be misleading. He stuck to his guns, saying: “we will come back to you”.

Facebook’s privacy controls

Facebook’s user settings were overridden by a checkbox that was not in plain sight, which allowed your friends’ apps to access your data.

When did Facebook change the API that overrode its own central privacy page?

In November 2009, Facebook had a central privacy page that let you “control who can see your profile and personal information”. In November 2011, the US Federal Trade Commission filed a complaint against Facebook for allowing external app developers to access personal information. Allan responded that the privacy settings included a checkbox to disallow access to your data by applications installed by friends.

What about non-Facebook user data?

Facebook uses it to link connections when that person becomes a Facebook user. It does not make any money from it.

What’s your beef with Six4Three?

Their app, Pikinis, depended on friends’ data. When Facebook moved the API to version 2, as mentioned above, Six4Three sued Facebook because their app would no longer work.

Discussion on whether Facebook can be held accountable for its actions

Allan agrees that there should be a global regulation to hold the company accountable for its actions.

Are you serious about regulation on a global level?

There are now tens of thousands of Facebook employees working on securing user privacy; there were too few before. Allan agreed that global-level regulation should exist, that the company should be accountable for its actions, and that sanctions should follow any undesirable actions by the company. Perhaps this would be communicated by a global organization like the United Nations (UN).

How is Facebook policing fake accounts and their networks?

It is an ongoing battle. Most fake account creation is not politically motivated but commercially motivated: selling followers, pushing spam, and so on. More clicks mean more money for the operators. Many of these accounts are taken down within minutes of creation. “We have artificial intelligence systems that try and understand what a fake account looks like and shut them down as soon as they come up”. This applies to mass creation of accounts.

In political cases, only one or two accounts are created and they act as genuine users. Facebook is still removing fake accounts related to Russia, and Allan says they’re trying to get better all the time. Low-quality information on Facebook has decreased by 50%, according to academic research. Fake users behind VPNs are difficult to address.

To run political ads, an account needs to be a regularly used one; a driving licence or passport is required, and the payment information is stored with Facebook in addition to any other information Facebook may have. Allan says that in this case running fake ads would be unwise, since the information can be used to prosecute the fake account user even if the documents supplied were fake.

In the context of fake ads or information, Allan agreed that the judicial authority of the specific country is best entrusted with taking down sensitive information. He gave an example: if someone claims that a politician is corrupt and he is not, taking the claim down is correct; but if he is corrupt and the claim is taken down, then genuine information is lost.

A case of non-regulation was pointed out

Edwin Tong of Singapore pointed out a hate speech Facebook comment in Sri Lanka. The comment was in Sinhalese, and Facebook did not remove it even after it was reported as hate speech. Allan said that it was a mistake and that Facebook is investing heavily in artificial intelligence with sets of hate speech keywords that can weed out such comments; they are working through the different languages on this.

How will Facebook provide transparency on the use of measures taken against fake news?

There is a big push around academic study in this area: Facebook is working with academics to understand the fake news problem better, while also trying to ensure it doesn’t end up sharing user data that people would find inappropriate.

How is Facebook monitoring new sign-ups and posts during elections?

Allan said there should not be anonymous users. Suspicious accounts hit a checkpoint the next time they log in, which says more information is required.

Would Facebook consider working actively with local election authorities to remove or flag posts that would influence voter choice?

They think this is essential. The judicial system is best placed to decide whether such posts are true or false and to make the call to remove them. Making everyone feel that an election was free and fair is not something Facebook can do on its own.

What is Facebook doing to prevent misuse of its algorithms to influence elections?

First, the algorithms have been changed in the way they surface information generally, and now better classify low-quality content. Second, there is a category of borderline content which is not banned but close to being banned; it tends to get rewarded by the algorithm, and work is being done to reduce its reach instead. Third, third-party fact-checkers mark posts as true or false. This is about tilting the algorithm’s scales toward higher-quality, less sensational content and away from lower-quality, more sensational content.
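The idea of "tilting the scales" can be sketched in a few lines. This is a toy illustration of our own, assuming a quality classifier score and a demotion factor; it is not Facebook's actual ranking code, and all names and weights here are made up.

```python
# Toy illustration (not Facebook's real ranking code) of demoting
# low-quality and borderline content instead of rewarding engagement.

def rank_score(engagement: float, quality: float, borderline: bool) -> float:
    """Combine raw engagement with a quality multiplier in [0, 1].

    Borderline content (close to, but not over, the policy line) gets an
    extra demotion even though it is not banned outright.
    """
    score = engagement * quality
    if borderline:
        score *= 0.5  # demote instead of letting engagement reward it
    return score

# A sensational borderline post with high engagement still ranks below
# a less engaging but higher-quality post.
posts = [
    ("sensational-borderline", rank_score(100.0, 0.3, True)),
    ("quality-report", rank_score(40.0, 0.9, False)),
]
ranked = sorted(posts, key=lambda p: p[1], reverse=True)
```

Under these made-up weights, the quality report (score 36.0) outranks the borderline post (score 15.0) despite having less than half its engagement, which is the "tilt" Allan described.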

What measures are being taken against fake news on WhatsApp?

There are services that provide fake WhatsApp numbers. Report them to the company and they will take them down, says Allan. They are aware of this practice and its use, and it needs to be part of the election protection effort.

Closing discussion

After the lengthy round of grilling in the fake news inquiry, Angus reiterated that they expect Facebook to be accountable for its actions.

Would you be interested in asking your friend Mr Zuckerberg if we should have a discussion about anti-trust?

You and Mr. Zuckerberg are the symptoms. Perhaps the best regulation is anti-trust: break Facebook up from WhatsApp and Instagram, and allow competition.

Allan answered that it depends on the problem to be solved. Angus shot back that “the problem is Facebook” and needs to be addressed: it has unprecedented economic control of every form of social discourse and communication.

Angus asked Facebook to show corporate accountability. Given its unwillingness to be accountable to an international body, perhaps anti-trust measures would help obtain credible democratic responses from the corporation.

These were some of the highlights of the questions and answers at the committee meeting held on 27th November 2018 at the House of Commons. We recommend watching the complete proceedings for more comprehensive context here.

In our view, Mr Allan tried to answer many of the questions during the three-hour session of this fake news inquiry better than Sandberg or Zuckerberg did in their hearings, but the answers were less than satisfactory on the important topics concerning Facebook’s data and privacy controls. It does appear that Facebook will continue to delay, deny, and deflect as much as it can.
