
Troll Patrol Report: Amnesty International and Element AI use machine learning to understand online abuse against women


Amnesty International has partnered with Element AI to release the Troll Patrol report on online abuse against women on Twitter. The findings come out of their Troll Patrol project, which invites human rights researchers, technical experts, and online volunteers to build a crowd-sourced dataset of online abuse against women.

Abuse of women on social media websites has been rising at an unprecedented rate. Social media companies have a responsibility to respect human rights and to ensure that women using their platforms are able to express themselves freely and without fear. However, this has not been the case with Twitter, as Amnesty's findings show.

Amnesty’s methodology was powered by machine learning

Amnesty and Element AI studied tweets mentioning 778 journalists and politicians from the UK and US throughout 2017, and then used machine learning techniques to analyze the abuse directed at these women at scale.

  • The first step was to design a large, unbiased dataset of tweets mentioning the 778 women politicians and journalists from the UK and US.
  • Next, over 6,500 volunteers (aged between 18 and 70, from over 150 countries) analyzed 288,000 unique tweets to create a labeled dataset of abusive or problematic content. Labeling was based on simple questions, such as whether a tweet was abusive or problematic, and if so, whether it revealed misogynistic, homophobic, or racist abuse or other types of violence.
  • Three experts also categorized a sample of 1,000 tweets to assess the quality of the labels produced by the digital volunteers.
  • Element AI then applied data science techniques, using a subset of the Decoders' and the experts' categorizations, to extrapolate the analysis of abuse across the full dataset (a minimal sketch of this kind of classifier follows the list).
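The report does not publish Element AI's code, but the pipeline it describes (crowd-labeled tweets feeding a supervised text classifier) can be sketched in a few lines. The snippet below is a minimal, hypothetical baseline using scikit-learn; the tweets, labels, and model choice are illustrative assumptions, not Amnesty's or Element AI's actual data or methodology.

```python
# A minimal, hypothetical sketch of a supervised abuse classifier.
# Toy data only; in the real project, over 6,500 volunteers labeled
# 288,000 tweets, with expert labels used for quality checks.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tweets = [
    "Great piece on the budget vote, thank you for covering it",
    "You should be silenced, nobody wants to hear from you",
    "Looking forward to the interview tonight",
    "Go back to the kitchen, politics is not for you",
    "Strong reporting on the housing crisis",
    "You are a disgrace and deserve everything you get",
]
labels = [0, 1, 0, 1, 0, 1]  # 1 = abusive/problematic, 0 = neither

# Bag-of-words features plus a linear model: a common simple baseline.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
model = LogisticRegression()
model.fit(vectorizer.fit_transform(tweets), labels)

# Score a new, unseen tweet.
new_tweet = ["Nobody wants to hear from someone like you"]
p_abusive = model.predict_proba(vectorizer.transform(new_tweet))[0, 1]
print(f"P(abusive or problematic) = {p_abusive:.2f}")
```

At Troll Patrol's scale, most of the hard work sits in the labeling design and bias control (who labels, how disagreements are resolved) rather than in the choice of model.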

Key findings from the report

Per the findings of the Troll Patrol report,

  • 7.1% of tweets sent to the women in the study were “problematic” or “abusive”. This amounts to 1.1 million tweets mentioning 778 women across the year, or one every 30 seconds (a quick check of this rate follows the list).
  • Women of color (black, Asian, Latinx, and mixed-race women) were 34% more likely to be mentioned in abusive or problematic tweets than white women.
  • Black women were disproportionately targeted, being 84% more likely than white women to be mentioned in abusive or problematic tweets.
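As a quick back-of-envelope check of that rate (arithmetic done here, not a figure stated in the report): 1.1 million tweets spread evenly over a year works out to

\[
\frac{365 \times 24 \times 3600\ \text{s}}{1.1 \times 10^{6}\ \text{tweets}} \approx 28.7\ \text{s per tweet},
\]

which rounds to one abusive or problematic tweet every 30 seconds.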

Source: Amnesty

  • Women from across the political spectrum faced similar levels of online abuse: liberals and conservatives alike, as well as left- and right-leaning media organizations, were targeted.

Source: Amnesty


What does this mean for people in tech?

Social media organizations are repeatedly failing in their responsibility to protect women’s rights online. They fall short of adequately investigating and responding to reports of violence and abuse in a transparent manner, which leads many women to silence or censor themselves on these platforms. Such abuse hinders freedom of expression online and undermines women’s mobilization for equality and justice, particularly for groups who already face discrimination and marginalization.

What can tech platforms do?

One of the recommendations of the report is that social media platforms should publicly share comprehensive and meaningful information about reports of violence and abuse against women, as well as other groups, on their platforms, and explain in detail how they are responding to those reports.

Although Twitter and other platforms are using machine learning for content moderation and flagging, they should be transparent about the algorithms they use. They should publish information about training data, methodologies, moderation policies, and technical trade-offs (such as between greater precision and greater recall) for public scrutiny. Machine learning automation should ideally be part of a larger content moderation system characterized by human judgment, greater transparency, rights of appeal, and other safeguards.
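To make the precision/recall trade-off concrete, here is a small, self-contained sketch; the abuse scores and ground-truth labels are invented for illustration and do not come from any real moderation system:

```python
# Illustrating the precision/recall trade-off in a moderation classifier.
# Scores and labels below are made up for illustration only.
import numpy as np
from sklearn.metrics import precision_score, recall_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # 1 = abusive
scores = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.90, 0.60, 0.15, 0.55, 0.30])

# Raising the threshold flags fewer tweets: precision rises, recall falls.
for threshold in (0.3, 0.5, 0.7):
    y_pred = (scores >= threshold).astype(int)
    p = precision_score(y_true, y_pred, zero_division=0)
    r = recall_score(y_true, y_pred)
    print(f"threshold={threshold:.1f}  precision={p:.2f}  recall={r:.2f}")
```

A platform that raises its threshold flags less legitimate speech (higher precision) but misses more abuse (lower recall); documenting where that threshold sits, and why, is exactly the kind of disclosure the report calls for.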

Amnesty, in collaboration with Element AI, also developed a machine learning model to better understand the potential and risks of using machine learning in content moderation systems. This model achieved results comparable to those of the digital volunteers at predicting abuse, although Amnesty notes it is still ‘far from perfect’. It achieves about a 50% accuracy level when benchmarked against the judgment of experts: it identified 2 in every 14 tweets as abusive or problematic, whereas the experts identified 1 in every 14 tweets as abusive or problematic.
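One hedged way to reconcile those numbers (a reading offered here, not a calculation stated in the report): if experts flag 1 in 14 tweets and the model flags 2 in 14, then even in the best case, where every expert-flagged tweet is among the model's flags, only about half of the model's flags agree with the experts:

\[
\text{precision} \approx \frac{1/14}{2/14} = 0.5,
\]

which is consistent with the roughly 50% figure Amnesty reports.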

“Troll Patrol isn’t about policing Twitter or forcing it to remove content. We are asking it to be more transparent, and we hope that the findings from Troll Patrol will compel it to make that change. Crucially, Twitter must start being transparent about how exactly they are using machine learning to detect abuse, and publish technical information about the algorithms they rely on,” said Milena Marin, senior advisor for tactical research at Amnesty International.

Read more: The full list of Amnesty’s recommendations to Twitter.
People on Twitter (the irony) are shocked at the release of Amnesty’s report, and #ToxicTwitter is trending.

Check out the full Troll Patrol report on Amnesty’s website. Also, check out their machine learning-based methodology in detail.
