On Monday, Stanford University launched the new Institute for Human-Centered Artificial Intelligence (HAI) to augment humanity with AI. The institute aims to study, guide and develop human-centered artificial intelligence technologies and applications and advance the goal of a better future for humanity through AI, according to the announcement.
Its co-directors are John Etchemendy, a professor of philosophy and former Stanford University provost, and Fei-Fei Li, a professor of computer science and former chief scientist for Google Cloud AI and ML.
“So much of the discussion about AI is focused narrowly around engineering and algorithms… We need a broader discussion: something deeper, something linked to our collective future. And even more importantly, that broader discussion and mindset will bring us a much more human-centered technology to make life better for everyone,” Li explained in a blog post.
The institute was launched at a symposium on campus, and it will include faculty members from all seven schools at Stanford — including the School of Medicine — and will work closely with companies in a variety of sectors, including health care, and with organizations such as AI4All.
“Its biggest role will be to reach out to the global AI community, including universities, companies, governments and civil society to help forecast and address issues that arise as this technology is rolled out,” said Etchemendy, in the announcement. “We do not believe we have answers to the many difficult questions raised by AI, but we are committed to convening the key stakeholders in an informed, fact-based quest to find those answers.”
The symposium featured a star-studded speaker lineup that included industry titans Bill Gates, Reid Hoffman, Demis Hassabis, and Jeff Dean, as well as dozens of professors in fields as diverse as philosophy and neuroscience. Even California Governor Gavin Newsom made an appearance, giving the final keynote speech. The audience included former Secretaries of State Henry Kissinger and George Shultz, former Yahoo CEO Marissa Mayer, and Instagram co-founder Mike Krieger.
Any AI initiative that government, academia, and industry all jointly support is good news for the future of the tech field. HAI differs from many other AI efforts in that its goal is not to create AI that rivals humans in intelligence, but rather to find ways for AI to augment human capabilities and enhance human productivity and quality of life.
If you missed the event, you can view a video recording here.
Institute aims to represent humanity but draws criticism as exclusionary
While the institute’s mission statement declares that “The creators and designers of AI must be broadly representative of humanity,” observers noticed that of the 121 faculty members listed on its website, not a single one is Black.
Stanford just launched their Institute for Human-Centered Artificial Intelligence (@StanfordHAI) with great fanfare. The mission: "The creators and designers of AI must be broadly representative of humanity."
121 faculty members listed.
Not a single faculty member is Black. pic.twitter.com/znCU6zAxui
— Chad Loder ❁ (@chadloder) March 21, 2019
Questions were raised as to why so many of the most influential people in the Valley decided to align with this center and publicly support it, and why the center aims to raise $1 billion to further its efforts. What does this center offer such a powerful group of people?
“The people who spoke at the launch event and the staff at the Center are the people who created the problems we face, and the problems we face have only been brought to light by diligent journalists, tech worker organizers, and communities affected…” 😶 https://t.co/8uOrBVYHMG
— Anna Lauren Hoffmann (@annaeveryday) March 21, 2019
Soon after these comments appeared on Twitter, the institute’s website was updated to include one previously unlisted faculty member: Juliana Bidadanure, an assistant professor of philosophy. According to a version of the page preserved on the Internet Archive’s Wayback Machine, Bidadanure was not listed among the institute’s staff before, though she did speak at the institute’s opening event.
We live in an age where predictive policing is real and can disproportionately target minority communities, and where AI-driven hiring can discriminate against women. Google’s and Facebook’s algorithms decide what information we see, and YouTube’s decides which conspiracy theory it serves up next. Yet the algorithms making those decisions are closely guarded company secrets with global impact.
In Silicon Valley and the broader Bay Area, the conversation and the speakers have shifted. It’s no longer a question of whether technology can discriminate. The questions now are who is impacted, how we can fix it, and what we are even building in the first place.
When a group of mostly white engineers gets together to build these systems, the impact on marginalized groups is particularly stark. Algorithms can reinforce racism in domains like housing and policing. Facebook recently announced that it has removed ad-targeting options based on protected classes such as race, ethnicity, sexual orientation, and religion. Algorithmic bias mirrors what we see in the real world: artificial intelligence mirrors its developers and the data sets it’s trained on.
Where there used to be a popular mythology that algorithms were just technology’s way of serving up objective knowledge, there’s now a loud and increasingly global argument about just who is building the tech and what it’s doing to the rest of us.
The stated goal of Stanford’s new human-AI institute is admirable. But to get to a group that is truly “broadly representative of humanity,” they’ve got miles to go.