
After the recent Christchurch terrorist attack in New Zealand, tech companies scrambled to act in time against the speed and volume of content that was uploaded, re-uploaded, and shared by users worldwide. Facebook faced severe global pressure to restrict the use of Facebook Live, given that the shootings were live-streamed on its app. Following this pressure, Facebook has now decided to impose restrictions on its live-streaming feature. Yesterday, in a statement, Facebook declared that from now on it will restrict users from using Facebook Live if they break certain rules, including its Dangerous Organizations and Individuals policy.

What is the restriction?

Facebook is calling this restriction a ‘one strike’ policy that tightens the rules specifically for Live. Anyone who violates its most serious policies, such as those covering violence and criminal behavior or coordinating harm, will be restricted from using Live for a set period of time, for example 30 days, starting from their first offense. A user who shares a link to a statement from a terrorist group with no context, for instance, will be immediately blocked from using Live for a set period. These restrictions will eventually be extended to other areas of Facebook, such as creating ads.
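Facebook has not published the enforcement mechanics, but the rule it describes is a simple first-offense check: one qualifying violation immediately starts a time-boxed block on Live. Here is a minimal sketch of that logic; the names (`UserModerationState`, `can_go_live`, `SERIOUS_POLICIES`) and the 30-day window are illustrative assumptions, not Facebook's implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical illustration of a "one strike" rule: a single serious
# violation blocks Live for a fixed window. The policy names and the
# 30-day value are assumptions based on the examples in the announcement.
RESTRICTION_DAYS = 30
SERIOUS_POLICIES = {
    "dangerous_organizations_and_individuals",
    "violence_and_criminal_behavior",
    "coordinating_harm",
}

@dataclass
class UserModerationState:
    live_blocked_until: Optional[datetime] = None

    def record_violation(self, policy: str, now: datetime) -> None:
        # One strike: the first serious violation immediately starts the block.
        if policy in SERIOUS_POLICIES:
            self.live_blocked_until = now + timedelta(days=RESTRICTION_DAYS)

    def can_go_live(self, now: datetime) -> bool:
        return self.live_blocked_until is None or now >= self.live_blocked_until

# Usage: sharing a terrorist group's statement with no context triggers the block.
user = UserModerationState()
now = datetime.utcnow()
user.record_violation("dangerous_organizations_and_individuals", now)
print(user.can_go_live(now))                       # False: blocked on first offense
print(user.can_go_live(now + timedelta(days=31)))  # True: the window has elapsed
```

A real system would also have to handle appeals, repeat offenses, and the planned extension to other surfaces such as ads, but the core one-strike trigger is this small.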

The Facebook announcement comes on the eve of a meeting hosted by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron in Paris. The meeting is being held to confirm the “Christchurch Call”, a pledge that asks participants to work toward eliminating terrorist and violent extremist content on social media and other online platforms. Its main aim is to push for stricter rules committing social media firms to keeping terrorism and violent extremism off their platforms.

Per a report by Stuff, Ardern described Facebook's crackdown on the abuse of its live-streaming service as a good first step “that shows the Christchurch Call is being acted on”.

Last month, Australia introduced legislation that imposes hefty fines on social media companies, and even jail time for their executives, if they fail to remove violent content quickly. The legislation can also fine companies up to 10 percent of their annual revenue.

Other steps taken by Facebook

One of the main challenges Facebook faced after the Christchurch attack was removing edited versions of the video of the attack, which were hard to detect because even small edits defeat exact file matching (the sketch after the list below illustrates this). To address it, Facebook is investing $7.5 million in research partnerships with the University of Maryland, Cornell University, and the University of California, Berkeley. Their main aim is to research new techniques to:

  • Detect manipulated media across images, video, and audio.
  • Distinguish between unwitting posters and adversaries who intentionally manipulate videos and photographs.
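To see why edited copies slip past naive detection, compare an exact cryptographic hash with a perceptual hash: changing a single pixel produces a completely different SHA-256 digest, while a perceptual hash stays within a small Hamming distance and can be matched with a threshold. A minimal sketch, assuming the third-party Pillow and ImageHash packages (purely illustrative, not a technique Facebook has said it uses):

```python
import hashlib
import io

from PIL import Image   # pip install Pillow
import imagehash        # pip install ImageHash

# Build a small test frame and an "edited" copy with one pixel changed.
original = Image.new("RGB", (64, 64))
for x in range(64):
    for y in range(64):
        original.putpixel((x, y), (x * 4, y * 4, 128))
edited = original.copy()
edited.putpixel((0, 0), (255, 255, 255))

def sha256_of(img: Image.Image) -> str:
    """Exact hash of the encoded image bytes."""
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return hashlib.sha256(buf.getvalue()).hexdigest()

# The exact hashes diverge completely after a trivial edit...
print(sha256_of(original) == sha256_of(edited))   # False

# ...while the perceptual hashes stay close, so a distance threshold
# can flag the edited copy as a near-duplicate of the original.
distance = imagehash.phash(original) - imagehash.phash(edited)
print(distance)        # small, typically 0-2 for a one-pixel edit
print(distance <= 8)   # True: flagged as a near-duplicate
```

Adversarial uploaders deliberately crop, re-encode, and overlay footage to push that distance past any fixed threshold, which is exactly the gap the funded research on manipulated media aims to close.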

Facebook also hopes to add other research partners to the initiative, which is focused on combating deepfake videos.

To read the full statement, head over to the Facebook Newsroom website.

Read Next

How social media enabled and amplified the Christchurch terrorist attack

Facebook bans six toxic extremist accounts and a conspiracy theory organization


A born storyteller turned writer!