
Of late, the world has been shaken by a rising number of data-related scandals and attacks that have cast a shadow over social media platforms.

The tremor reached Wall Street last week, when tech stocks came crashing down after Facebook’s Q2 earnings call on 25th July and fell further after Twitter’s earnings call on 27th July. Social media regulation is now at the heart of discussions across the tech sector.

The social butterfly effect is real

2018 began with the Cambridge Analytica scandal, in which the data analytics company was alleged not only to have influenced the outcomes of the UK’s Brexit referendum and the US presidential election, but also to have illegally harvested copious amounts of data from Facebook. Facebook then fell further down the rabbit hole with Mueller’s indictment, which highlighted the role social media played in the 2016 election interference. ‘Fake news’ on WhatsApp triggered mob violence in India, while Twitter has been plagued by fake accounts and tweets that never seem to go away.

Fake news and friends crash the tech stock party

Last week, social media stocks fell by double digits (Facebook by 20% and Twitter by 21%), dragging down the entire tech sector; the slide continues to keep tech stocks in bearish territory and haunt tech shareholders even today. Wall Street has been a nervous wreck this week, hoping the bad news will stop spiraling downwards and that good news from Apple will undo last week’s nightmare.

Amidst these reports, lawmakers, regulators, and organizations alike are facing growing pressure to regulate social media platforms.

How are lawmakers proposing to regulate social media?

Even though lawmakers have started paying increased attention to social networks over the past year, there has been little progress in how well they actually understand them. This could soon change: Axios’ David McCabe has published a policy paper from the office of Senator Mark Warner that describes a comprehensive regulatory policy covering almost every aspect of social networks.

The proposal is designed to address three broad categories: combating misinformation, privacy and data protection, and promoting competition in the tech space.

The first category, covering misinformation, disinformation, and the exploitation of technology, includes ideas such as:

  • Networks are to label automated bots.
  • Platforms are to verify identities.
  • Platforms are to make regular disclosures about how many fake accounts they’ve deleted.
  • Platforms are to create APIs for academic research.

The second category, privacy and data protection, includes policies such as:

  • Create a US version of the GDPR.
  • Designate platforms as information fiduciaries with the legal responsibility of protecting users’ data.
  • Empower the Federal Trade Commission to make rules around data privacy.
  • Create a legislative ban on dark patterns that trick users into accepting terms and conditions without reading them.
  • Allow the government to audit corporate algorithms.

The third category, promoting competition in the tech space, would require:

  • Tech companies to continuously disclose to consumers how their data is being used.
  • Social network data to be made portable.
  • Social networks to be interoperable.
  • Certain products to be designated as essential facilities, with third parties given fair access to them.

Although these proposals, and others like them (a British parliamentary committee has recommended imposing much stricter guidelines on social networks), remain far from becoming law, they are an assurance that lawmakers and regulators are serious about taking steps to ensure that social media platforms don’t get out of hand.

Regulation by lawmakers and legal authorities is only effective if the platforms themselves care about the issues and are motivated to behave in the right way. Losing a significant chunk of their user base in the EU lately seems to have provided that very incentive. Social media platforms have now started seeking ways to protect user data and improve their services in general, to alleviate some of the problems they helped create or amplify.

How is Facebook planning to course-correct its social media Frankenstein?

Last week, Mark Zuckerberg started the fateful earnings call by saying, “I want to start by talking about all the investments we’ve made over the last six months to improve safety, security, and privacy across our services. This has been a lot of hard work, and it’s starting to pay off.”

He then went on to elaborate on Facebook’s key areas of focus for the coming months, or the next 1.5 years to be more specific.

  • Ad transparency tools: Anyone can view any ad, even ads that are not targeted at them. Facebook is also developing an archive of ads with political or issue content, which will be labeled to show who paid for them, what the budget was, and how many people viewed the ads, and which will let anyone search ads from a given advertiser for the past 7 years.
  • Disallow and report known election interference attempts: Facebook will proactively look for and eliminate fake accounts, pages, and groups that violate its policies. This could minimize election interference, says Zuckerberg.
  • Fight against misinformation: Remove the financial incentives for spammers to create fake news, and stop pages that repeatedly spread false information from buying ads.
  • Shift from reactive to proactive detection with AI: Use AI to prevent fake accounts, which generate a lot of the problematic content, from ever being created in the first place. Facebook can now remove more bad content quickly because it doesn’t have to wait until the content is reported. In Q1, for example, almost 90% of the graphic violence content that Facebook removed or added a warning label to was identified using AI. (A toy sketch of this kind of signal-based flagging follows this list.)
  • Invest heavily in security and privacy. No further elaboration on this aspect was given on the call.
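To make the idea of proactive, signal-based detection a little more concrete, here is a minimal, purely illustrative sketch of how an automated-account classifier could work in principle. This is not Facebook’s (or Twitter’s) actual system; the behavioral features, training data, and library choice below are assumptions invented purely for demonstration.

    # Toy illustration only: flag accounts as likely automated from a few
    # simple behavioral features. Real systems use far richer signals,
    # run at enormous scale, and combine many models with human review.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical features per account:
    # [posts_per_hour, avg_seconds_between_posts, fraction_of_posts_with_links]
    X_train = np.array([
        [0.2, 5400.0, 0.05],   # human-like activity
        [0.5, 3000.0, 0.10],
        [40.0, 12.0, 0.95],    # bot-like activity
        [25.0, 30.0, 0.90],
    ])
    y_train = np.array([0, 0, 1, 1])  # 0 = human, 1 = likely automated

    model = make_pipeline(StandardScaler(), LogisticRegression())
    model.fit(X_train, y_train)

    # Score new accounts proactively, rather than waiting for user reports
    new_accounts = np.array([[30.0, 20.0, 0.85], [0.3, 4000.0, 0.02]])
    print(model.predict(new_accounts))  # expected: [1 0], i.e. only the first account is flagged

The point Zuckerberg makes is captured in the last step: suspicious accounts are surfaced before their content spreads, instead of after users report it.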

This week, Facebook reported that it had detected and removed 32 pages and fake accounts that had engaged in coordinated inauthentic behavior. These accounts and pages were part of a political influence campaign that was potentially built to disrupt the midterm elections. According to Facebook’s Head of Cybersecurity Policy, Nathaniel Gleicher, “So far, the activity encompasses eight Facebook Pages, 17 profiles and seven accounts on Instagram.”

Facebook’s action is a change from last year when it was widely criticized for failing to detect Russian interference in the 2016 presidential election. Although the current campaign hasn’t been linked to Russia (yet), Facebook officials pointed out that some of the tools and techniques used by the accounts were similar to those used by the Russian government-linked Internet Research Agency.

How Twitter plans to make its platform a better place for real and civilized conversation

“We want people to feel safe freely expressing themselves and have launched new tools to address problem behaviors that distort and distract from the public conversation. We’re also continuing to make it easier for people to find and follow breaking news and events…” said Jack Dorsey, Twitter’s CEO, on the Q2 2018 earnings call.

The letter to Twitter shareholders further elaborates on this point:

We continue to invest in improving the health of the public conversation on Twitter, making the service better by integrating new behavioral signals to remove spammy and suspicious accounts and continuing to prioritize the long-term health of the platform over near-term metrics. We also acquired Smyte, a company that specializes in spam prevention, safety, and security.

 

Unlike Facebook’s anecdotal support for its claims, Twitter provided quantitative evidence to show the seriousness of its endeavor. Here are some key metrics from this quarter’s letter to shareholders.

  • Results from early experiments on using new tools to address behaviors that distort and distract from the public conversation show a 4% drop in abuse reports from search and 8% fewer abuse reports from conversations.
  • More than 9 million potentially spammy or automated accounts identified and challenged per week
  • 8k fewer average spam reports per day
  • More than 2x the number of accounts removed for violating Twitter’s spam policies compared to last year.

It is clear that Twitter has been quite active in looking for ways to eliminate toxicity from its network.

In a series of tweets, CEO Jack Dorsey stated that the company has not always met users’ expectations: “We aren’t proud of how people have taken advantage of our service, or our inability to address it fast enough,” adding that the company needs a “systemic framework.”

Back in March 2018, Twitter invited external experts to propose ways of measuring the health of conversations on the platform, in order to encourage healthier conversation, debate, and critical thinking. Twitter asked them to create proposals taking inspiration from the concept of measuring conversation health defined by the non-profit Cortico. As of yesterday, Twitter has finalized its dream team of researchers, who are ready to take up the challenge of examining echo chambers and unhealthy behavior on Twitter and then translating their findings into practical algorithms down the line.

With social media here to stay, both lawmakers and social media platforms are looking for new ways to regulate it. Any misstep by these social media sites will have solid repercussions, including not only closer scrutiny by the government and private watchdogs, but also losses in stock value, a damaged reputation, and links to other forms of data misuse and accusations of political bias.

Lastly, let’s not forget the responsibility that lies with the ‘social’ side of these platforms. Individuals need to play their part by proactively reporting fake news and stories, and they also need to be more selective about the content they share on social media.

Read Next

Why Wall Street unfriended Facebook: Stocks fell $120 billion in market value after Q2 2018 earnings call

Facebook must stop discriminatory advertising in the US, declares Washington AG, Ferguson

Facebook is investigating data analytics firm Crimson Hexagon over misuse of data

