
The debate about AI systems being non-inclusive, sexist, and racist has been going on for a long time. Though the blame is most often put on training data, one of the key reasons behind this behavior is the lack of diverse teams building these systems.

Last week, Stanford University launched a new institute, the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The institute aims to research and develop human-centered AI applications and technologies through multidisciplinary collaboration, in pursuit of “true diversity of thought”.

While the institute talked about diversity, its list of faculty members failed to reflect it. Of the 121 members initially announced as part of the institute, more than 100 were white and the majority were male, even though women and people of color have pioneered work in AI ethics and safety.

Emphasizing the importance of interdisciplinary research in AI, Desmond U. Patton, the director of SAFElab, shared his experience of working on AI systems as a non-technical researcher. In his blog post, he also encouraged fellow social workers and other non-technical professionals to contribute to AI research to make AI more inclusive.

For the past six years, Patton and his colleagues from computer science and data science have worked on co-designing AI systems aimed at understanding the underlying causes of community-based violence. He believes that social media posts can be very helpful in identifying people who are at risk of becoming involved in gun violence. So he assembled an interdisciplinary group of researchers who, using AI techniques, study the language and images in social media posts to identify patterns of grieving and anger.
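To give a rough sense of what this kind of text analysis can look like in code, here is a minimal, hypothetical sketch that flags posts containing language loosely associated with grief or aggression using a hand-built keyword lexicon. This is not Patton's actual pipeline; real systems rely on trained models and, as he stresses, on domain experts to decide which concepts matter and how they should be interpreted in context.

```python
# Hypothetical sketch only: coarse labeling of social-media posts with a
# keyword lexicon. Real research systems use trained classifiers and
# domain experts to define and frame the concepts being analyzed.

GRIEF_TERMS = {"rip", "miss you", "gone too soon", "rest easy"}
ANGER_TERMS = {"payback", "on sight", "they gonna pay"}

def label_post(text: str) -> str:
    """Return a coarse label ('grief', 'anger', or 'neutral') for a post."""
    lowered = text.lower()
    if any(term in lowered for term in GRIEF_TERMS):
        return "grief"
    if any(term in lowered for term in ANGER_TERMS):
        return "anger"
    return "neutral"

# Example posts (invented for illustration)
posts = [
    "RIP big bro, gone too soon",
    "they gonna pay for what they did",
    "great game last night with the squad",
]

for post in posts:
    print(f"{label_post(post):<8} | {post}")
```

Even this toy example shows why domain expertise matters: the same phrase can signal mourning, bravado, or song lyrics depending on context, which is exactly the kind of judgment Patton argues cannot be left to engineers alone.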

Patton believes that having a domain expert on the team is essential. All the crucial decisions about an AI system, such as which concepts should be analyzed, how those concepts are framed, and how errors in the outputs are analyzed, should be made jointly. His team also worked with community groups and people formerly involved in gangs affected by gun violence to co-design the AI systems. They hired community members and advisory teams and valued their suggestions, critiques, and ideas in shaping the systems.

Algorithmic and workforce bias have led to a number of controversies in recent years, including facial recognition systems misidentifying Black women. In response to such cases, Joy Buolamwini founded the Algorithmic Justice League (AJL), a collective focused on creating more inclusive and ethical AI systems. AJL researches algorithmic bias, provides a platform for people to raise their concerns and share their experiences with coded discrimination, and runs algorithmic audits to hold companies accountable.

Though it has not yet become the norm, the concept of interdisciplinary research is surely gaining the attention of researchers and technologists. At EmTech Digital, Rediet Abebe, a computer science researcher at Cornell University, said, “We need adequate representation of communities that are being affected. We need them to be present and tell us the issues they’re facing.”

She further added, “We also need insights from experts from areas including the social sciences and the humanities … they’ve been thinking about this and working on this for longer than I’ve been alive. These missed opportunities to use AI for social good—these happen when we’re missing one or more of these perspectives.”

Abebe has co-founded Mechanism Design for Social Good, a multi-institution, interdisciplinary research group that aims to improve access to opportunities for those who have been historically underserved and marginalized. The group has worked on several areas, including global inequality, algorithmic bias and discrimination, and the impact of algorithmic decision-making on specific policy domains such as online labor markets, health care, and housing.

AI researchers and developers need to collaborate with social scientists and underserved communities. The people affected by these systems should have a say in how they are built. The more diverse a team is, the wider the range of perspectives it brings, and that is the kind of team that drives real innovation in technology.

Read Patton’s full article on Medium.
