
Last Friday’s uncontrolled spread of horrific videos of the Christchurch mosque attack, and the propaganda coup it handed to those espousing hateful ideologies, raised hard questions about social media. The tech companies scrambled to act in time, given the speed and volume at which the footage was uploaded, re-uploaded and shared by users worldwide.

In Washington and Silicon Valley, the incident crystallized growing concerns about the extent to which government and market forces have failed to check the power of social media.

The failure highlighted social media companies’ struggle to police platforms that are massively lucrative yet persistently vulnerable to outside manipulation, despite years of promises to do better.

After the white supremacist live-streamed the attack, the video was uploaded to Facebook, Twitter, YouTube, and other platforms across the internet. These tech companies faced a backlash from the media and internet users worldwide, to the extent of being regarded as complicit in promoting white supremacism. In response, Google and Facebook have provided status reports on what they went through when the video was reported, the challenges they faced, and the next steps they plan to take to combat such incidents in the future.

Google’s report so far…

In an email to Motherboard, Google said it employs 10,000 people to moderate the company’s platforms and products. It also described the process that is followed when a user reports a piece of potentially violating content, such as the attack video (a rough sketch of this flow appears after the list):

  1. The user’s flagged report goes to a human moderator for assessment.
  2. The moderator is instructed to flag all pieces of content related to the attack as “Terrorist Content,” including the full-length manifesto or sections of it. Because of the document’s length, the email tells moderators not to spend an extensive amount of time trying to confirm whether a piece of content actually contains part of the manifesto.
  3. Instead, if the moderator is unsure, they should err on the side of caution and still label the content as “Terrorist Content,” which will then be reviewed by a second moderator.
  4. The second moderator is told to take the time to verify that it is a piece of the manifesto and to mark the content as terrorist content, no matter how long or short the section may be.
  5. Moderators are told to mark the manifesto or video as terrorism content unless there is an Educational, Documentary, Scientific, or Artistic (EDSA) context to it.
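
Read as a workflow, this is a two-stage escalation: the first reviewer errs on the side of caution, and the second reviewer takes the time to confirm. The following is a minimal, hypothetical sketch of that flow in Python; the function names, labels and parameters are ours for illustration, not Google’s internal tooling.

```python
from dataclasses import dataclass

# Hypothetical labels used only for this sketch.
TERRORIST_CONTENT = "Terrorist Content"
EDSA_ALLOWED = "Allowed (EDSA context)"
NO_ACTION = "No action"

@dataclass
class FlaggedContent:
    content_id: str
    description: str

def first_review(item: FlaggedContent, looks_related_to_attack: bool) -> str:
    """Stage 1: don't spend long verifying; if unsure, label and escalate."""
    if looks_related_to_attack:
        return TERRORIST_CONTENT  # escalated to a second moderator
    return NO_ACTION

def second_review(item: FlaggedContent,
                  confirmed_manifesto_or_footage: bool,
                  has_edsa_context: bool) -> str:
    """Stage 2: verify carefully, however short the excerpt is."""
    if not confirmed_manifesto_or_footage:
        return NO_ACTION
    if has_edsa_context:  # Educational, Documentary, Scientific or Artistic
        return EDSA_ALLOWED
    return TERRORIST_CONTENT

# A short, uncertain snippet is still escalated at stage 1,
# then confirmed and marked as terrorist content at stage 2.
snippet = FlaggedContent("abc123", "short excerpt of the manifesto")
print(first_review(snippet, looks_related_to_attack=True))
print(second_review(snippet, confirmed_manifesto_or_footage=True,
                    has_edsa_context=False))
```

The point of the split is that the quick first pass maximises how much suspect content gets caught, while the slower second pass keeps legitimate EDSA coverage from being swept up with it.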

Further, Google adds that it wants to preserve journalistic or educational coverage of the event, but does not want to allow the video or manifesto itself to spread across the company’s services without additional context.

At some point Google took the unusual step of automatically rejecting any footage of violence from the attack video, cutting out the process of a human determining the context of the clip. If, say, a news organization was impacted by this change, the outlet could appeal the decision, Google commented.

“We made the call to basically err on the side of machine intelligence, as opposed to waiting for human review,” YouTube’s Chief Product Officer Neal Mohan told the Washington Post in an article published Monday.

Google also tweaked its search function to show results from authoritative news sources. It suspended the ability to sort search results for clips by upload date, making it harder for people to find fresh copies of the attack footage.

“Since Friday’s horrific tragedy, we’ve removed tens of thousands of videos and terminated hundreds of accounts created to promote or glorify the shooter,” a YouTube spokesperson said.

“Our teams are continuing to work around the clock to prevent violent and graphic content from spreading, we know there is much more work to do,” the statement added.

Facebook’s update so far…

On Wednesday, Facebook also shared an update on how it has been working with the New Zealand Police to support their investigation. It provided additional information on how its products were used to circulate the video and how it plans to improve them.

So far Facebook has provided the following information:

  • The video was viewed fewer than 200 times during the live broadcast.
  • No users reported the video during the live broadcast.
  • Including the views during the live broadcast, the video was viewed about 4,000 times in total before being removed from Facebook.
  • Before Facebook was alerted to the video, a user on 8chan posted a link to a copy of the video on a file-sharing site.
  • The first user report on the original video came in 29 minutes after the video started, and 12 minutes after the live broadcast ended.
  • In the first 24 hours, Facebook removed more than 1.2 million videos of the attack at upload, which were therefore prevented from being seen on its services. Approximately 300,000 additional copies were removed after they were posted (a simplified sketch of how upload-time matching can work appears after this list).
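
Blocking a copy “at upload” means the incoming file is checked against fingerprints of known copies before it ever becomes visible. The snippet below is a deliberately naive, hypothetical illustration using an exact SHA-256 match; Facebook’s actual matching technology is far more sophisticated, and the limits of exact matching are exactly what the “visually-distinct variants” discussed later expose.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known copies of the video.
# (It contains the digest of an empty byte string purely so the example
# below is runnable.)
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def should_block_upload(file_bytes: bytes) -> bool:
    """Reject an upload whose digest matches a known copy of the video."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES

print(should_block_upload(b""))            # True: exact copy, caught at upload
print(should_block_upload(b"re-encoded"))  # False: same footage, different bytes
```

An exact copy is trivially caught, but any re-encode, trim or re-recording changes every byte of the file, which is one reason hundreds of thousands of copies still had to be removed after posting.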

Questions were also asked of Facebook about why artificial intelligence (AI) didn’t detect the video automatically. Facebook says its AI has made massive progress over the years and now proactively detects the vast majority of the content it removes, but it’s not perfect.

“To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare,” says Guy Rosen, VP of Product Management at Facebook.

Rosen further adds, “AI is an incredibly important part of our fight against terrorist content on our platforms, and while its effectiveness continues to improve, it is never going to be perfect. People will continue to be part of the equation, whether it’s the people on our team who review content, or people who use our services and report content to us. That’s why last year Facebook more than doubled the number of people working on safety and security to over 30,000 people, including about 15,000 content reviewers.”

Facebook further plans to:

  1. Improve its image and video matching technology so that it can stop the spread of viral videos of this nature, regardless of how they were originally produced.
  2. React faster to this kind of content on a live streamed video.
  3. Continue to combat hate speech of all kinds on their platform.
  4. Expand industry collaboration through the Global Internet Forum to Counter Terrorism (GIFCT).

Challenges Google and Facebook faced in moderating the video content

According to Motherboard, Google saw an unprecedented number of attempts to post footage from the attack, sometimes as fast as one piece of content per second. A further challenge was blocking access to the killer’s so-called manifesto, a 74-page document that spouted racist views and explicit calls for violence.

Google described the difficulties of moderating the manifesto, pointing to its length and the issue of users sharing snippets of it that Google’s content moderators may not immediately recognise.

“The manifesto will be particularly challenging to enforce against given the length of the document and that you may see various segments of various lengths within the content you are reviewing,” says Google.

A source with knowledge of Google’s strategy for moderating the New Zealand attack material said this can complicate moderation efforts because some outlets did use parts of the video and manifesto. UK newspaper The Daily Mail let readers download the terrorist’s manifesto directly from the paper’s own website, and Sky News Australia aired parts of the attack footage, BuzzFeed News reported.

Facebook, on the other hand, faces the challenge of automatically discerning such content from visually similar but innocuous content. For example, if thousands of videos from live-streamed video games were flagged by its systems, reviewers could miss the important real-world videos, where a prompt alert to first responders could get help on the ground.

Another challenge for Facebook is similar to the one Google faces: the proliferation of many different variants of the video makes it difficult for its image and video matching technology to prevent further spread.

Facebook found that a core community of bad actors worked together to continually re-upload edited versions of the video in ways designed to defeat detection. Second, a broader set of people unintentionally made the copies harder to match as they distributed the video: websites and pages, eager to get attention from people seeking out the footage, re-cut and re-recorded it into various formats.

In total, Facebook found and blocked over 800 visually-distinct variants of the video that were circulating.
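
Near-duplicate matching at this scale is commonly done with perceptual hashing: each frame (or the whole clip) is reduced to a tiny fingerprint, and fingerprints are compared by Hamming distance rather than exact equality. The sketch below is a generic average-hash example using Pillow, not Facebook’s actual technology; it shows why a light re-encode still matches, while a crop, watermark or camera re-recording can drift far enough to count as a new “visually-distinct” variant.

```python
from PIL import Image  # pip install pillow

def average_hash(image: Image.Image, hash_size: int = 8) -> int:
    """64-bit perceptual hash: one bit per pixel, set where the pixel is brighter than the mean."""
    small = image.convert("L").resize((hash_size, hash_size), Image.LANCZOS)
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for i, p in enumerate(pixels):
        if p > mean:
            bits |= 1 << i
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_known_variant(frame: Image.Image,
                          known_hashes: list[int],
                          threshold: int = 10) -> bool:
    """Treat the frame as a known variant if any stored fingerprint is within the threshold."""
    h = average_hash(frame)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)
```

A re-compressed copy changes only a few bits of such a fingerprint and stays within the threshold, whereas heavy edits push the distance past it; each variant that escapes then has to be fingerprinted and blocked separately, which is consistent with Facebook ending up with over 800 of them.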

Both companies appear to be working hard to improve their products and regain users’ trust and confidence.

