Earlier this month, the AI Now Institute published a report, authored by Sarah Myers West, Meredith Whittaker, and Kate Crawford, highlighting the link between the lack of diversity in the AI industry and the discriminatory behavior of AI systems. The report also recommends steps that companies and researchers should take to address these issues.

Sarah Myers West is a postdoctoral researcher at the AI Now Institute and an affiliate researcher at the Berkman Klein Center for Internet & Society. Meredith Whittaker is the co-founder of the AI Now Institute and leads Google’s Open Research Group and the Google Measurement Lab. Kate Crawford is a Principal Researcher at Microsoft Research and the co-founder and Director of Research at the AI Now Institute.

The AI industry suffers from a lack of diversity, gender imbalance, and biased systems

In recent years, we have come across several cases of “discriminating systems”. Facial recognition systems miscategorize black people and sometimes fail to work for trans drivers. When trained on online discourse, chatbots easily learn racist and misogynistic language. This behavior by machines is a reflection of society. “In most cases, such bias mirrors and replicates existing structures of inequality in the society,” says the report.

The study also sheds light on gender bias in the current workforce. According to the report, women account for only 18% of authors at leading AI conferences, while more than 80% of AI professors are men. At the tech giants, women make up a meager 15% of Facebook’s AI research staff and just 10% of Google’s.

The situation for black workers in the AI industry looks even worse. Black workers make up only 4% of the workforce at Facebook and Microsoft, and just 2.5% at Google. Moreover, the vast majority of AI studies assume gender is binary, commonly assigning people as ‘male’ or ‘female’ based on physical appearance and stereotypical assumptions, erasing all other forms of gender identity.

The report further reveals that, though there have been various “pipeline studies” tracking the flow of diverse job candidates into the field, they have failed to show substantial progress in improving diversity in the AI industry. “The focus on the pipeline has not addressed deeper issues with workplace cultures, power asymmetries, harassment, exclusionary hiring practices, unfair compensation, and tokenization that are causing people to leave or avoid working in the AI sector altogether,” the report reads.

What steps can the industry take to address bias and discrimination in AI systems?

The report lists 12 recommendations that AI researchers and companies should employ to improve workplace diversity and address bias and discrimination in AI systems.

  1. Publish compensation levels, including bonuses and equity, across all roles and job categories, broken down by race and gender.
  2. End pay and opportunity inequality, and set pay and benefit equity goals that include contract workers, temps, and vendors.
  3. Publish harassment and discrimination transparency reports, including the number of claims over time, the types of claims submitted, and actions taken.
  4. Change hiring practices to maximize diversity: include targeted recruitment beyond elite universities, ensure more equitable focus on under-represented groups, and create more pathways for contractors, temps, and vendors to become full-time employees.
  5. Commit to transparency around hiring practices, especially regarding how candidates are leveled, compensated, and promoted.
  6. Increase the number of people of color, women and other under-represented groups at senior leadership levels of AI companies across all departments.
  7. Ensure executive incentive structures are tied to increases in hiring and retention of underrepresented groups.
  8. For academic workplaces, ensure greater diversity in all spaces where AI research is conducted, including AI-related departments and conference committees.
  9. Remedying bias in AI systems is almost impossible when these systems are opaque. Transparency is essential, and begins with tracking and publicizing where AI systems are used, and for what purpose.
  10. Rigorous testing should be required across the lifecycle of AI systems in sensitive domains. Pre-release trials, independent auditing, and ongoing monitoring are necessary to test for bias, discrimination, and other harms.
  11. The field of research on bias and fairness needs to go beyond technical debiasing to include a wider social analysis of how AI is used in context. This necessitates including a wider range of disciplinary expertise.
  12. The methods for addressing bias and discrimination in AI need to expand to include assessments of whether certain systems should be designed at all, based on a thorough risk assessment.

Credits: AI Now Institute

Bringing diversity to the AI workforce

In order to address the diversity issue in the AI industry, companies need to change their current hiring practices and place a more equitable focus on under-represented groups. People of color, women, and other under-represented groups should get a fair chance to reach senior leadership levels of AI companies across all departments, and more pathways should be created for contractors, temps, and vendors to become full-time employees. To bridge the gender pay gap in the AI industry, it is important that companies are transparent about compensation levels, including bonuses and equity, broken down by race and gender.

In the past few years, several cases of sexual misconduct involving some of the biggest companies, like Google and Microsoft, have come to light because of movements like #MeToo, the Google Walkout, and more. These movements gave victims and supportive colleagues the courage to speak out against senior employees who abused their power. There have been cases where sexual harassment complaints were not taken seriously by HR and victims were told to just “get over it”. This is why companies should publish harassment and discrimination transparency reports that include the number and types of claims made and the actions the company has taken.

Academic workplaces should ensure diversity in all AI-related departments and conference committees. In the past, some of the biggest AI conferences, such as the Neural Information Processing Systems (NeurIPS) conference, have failed to provide a welcoming and safe environment for women. In a survey conducted last year, many respondents shared that they had experienced sexual harassment, and women reported persistent advances from men at the conference. The organizers of such conferences should ensure an inclusive and welcoming environment for everyone.

Addressing bias and discrimination in AI systems

To address bias and discrimination in AI systems, the report recommends rigorous testing across the lifecycle of these systems: pre-release trials, independent auditing, and ongoing monitoring to test for bias, discrimination, and other harms. Given the social implications of AI systems, addressing algorithmic bias alone is not enough. “The field of research on bias and fairness needs to go beyond technical debiasing to include a wider social analysis of how AI is used in context. This necessitates including a wider range of disciplinary expertise,” says the report.
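
The report itself does not prescribe concrete test code, but as a rough illustration, a pre-release bias audit often starts with simple disparity metrics, such as selection-rate gaps (demographic parity) and true-positive-rate gaps across demographic groups. The Python sketch below is a hypothetical example, not taken from the report; the function name, the 0.05 threshold, and the toy data are all assumptions made for illustration.

    # Minimal sketch of a pre-release bias check (illustrative, not from the report).
    # It compares a model's selection rate (demographic parity) and true-positive
    # rate across groups; the 0.05 flagging threshold is an arbitrary assumption.
    from collections import defaultdict

    def disparity_report(y_true, y_pred, groups, threshold=0.05):
        """Print per-group selection rate and TPR; flag large selection-rate gaps."""
        stats = defaultdict(lambda: {"n": 0, "pos_pred": 0, "tp": 0, "actual_pos": 0})
        for yt, yp, g in zip(y_true, y_pred, groups):
            s = stats[g]
            s["n"] += 1              # group size
            s["pos_pred"] += yp      # positive predictions in this group
            s["actual_pos"] += yt    # actual positives in this group
            s["tp"] += yt and yp     # true positives (1 only when yt == yp == 1)

        overall_rate = sum(y_pred) / len(y_pred)
        for g, s in sorted(stats.items()):
            sel_rate = s["pos_pred"] / s["n"]
            tpr = s["tp"] / s["actual_pos"] if s["actual_pos"] else float("nan")
            flag = "FLAG" if abs(sel_rate - overall_rate) > threshold else "ok"
            print(f"group {g}: selection_rate={sel_rate:.2f} tpr={tpr:.2f} [{flag}]")

    # Hypothetical toy labels, predictions, and group memberships:
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
    groups = ["a", "a", "a", "b", "b", "b", "b", "a"]
    disparity_report(y_true, y_pred, groups)

In this toy data, both groups are selected at the same rate, yet group “b” has a much lower true-positive rate; this is exactly the kind of disparity that pre-release trials, independent audits, and ongoing monitoring are meant to surface before and after deployment.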

When assessing an AI system, researchers and developers should also ask whether the system should be designed at all, given the risks it poses. The study calls for re-evaluating the AI systems currently used for classifying, detecting, and predicting race and gender. The idea of identifying race or gender from appearance alone is flawed and can be easily abused. Systems that use physical appearance to infer interior states, for instance those that claim to detect sexuality from headshots, are in urgent need of scrutiny.

To learn more, read the full report: Discriminating Systems.

Read Next

Microsoft’s #MeToo reckoning: female employees speak out against workplace harassment and discrimination

Desmond U. Patton, Director of SAFElab shares why AI systems should be a product of interdisciplinary research and diverse teams

Google’s Chief Diversity Officer, Danielle Brown resigns to join HR tech firm Gusto