Google yesterday announced a new external advisory board to help monitor the company’s use of artificial intelligence for ways in which it may violate the ethical principles it laid out last summer. The group was announced by Kent Walker, Google’s senior vice president of global affairs, and it includes experts on a wide range of subjects, including mathematics, computer science, philosophy, psychology, and even foreign policy. Following is the complete list of the advisory council appointed by Google:
- Alessandro Acquisti, a leading behavioral economist and privacy researcher
- Bubacarr Bah, an expert in applied and computational mathematics
- De Kai, a leading researcher in natural language processing, music technology and machine learning
- Dyan Gibbens, an expert in industrial engineering and CEO of Trumbull
- Joanna Bryson, an expert in psychology and AI, and a longtime leader in AI ethics
- Kay Coles James, a public policy expert with extensive experience working at the local, state and federal levels of government
- Luciano Floridi, a leading philosopher and expert in digital ethics
- William Joseph Burns, a foreign policy expert and diplomat
The group will be called the Advanced Technology External Advisory Council, and it appears Google wants it to be seen as an independent watchdog keeping an eye on how the company deploys AI in the real world. The council will focus on facial recognition technology and on mitigating built-in bias in machine learning training methods.
“This group will consider some of Google’s most complex challenges that arise under our AI Principles … providing diverse perspectives to inform our work,” Walker writes.
Behind the selection of the council
As for the members, the names may not be easily recognizable to those outside academia. However, the credentials of the board appear to be of the highest caliber, with resumes that include multiple presidential administration positions and posts at top universities, including the University of Oxford, the Hong Kong University of Science and Technology, and UC Berkeley.
Having said that, the selection of Heritage Foundation president Kay Coles James and Trumbull CEO Dyan Gibbens drew harsh criticism on Twitter. It has been noted that James, through her involvement with the conservative think tank, has espoused anti-LGBTQ rhetoric on her public Twitter profile:
Hey @Google exactly what do you hope to get out of a responsibility advisor who hates trans people, foreigners, and the environment, just wondering pic.twitter.com/4ksWdCRZ5N
— Os (@farbandish) March 26, 2019
Google AI ethics board has: AI ethics 15+ years & uncompromising @j2bryson, privacy star @ssnstudy, top information philospher @floridi, 2 more AI profs: De Kai Wu has done ethics work, Barr hasn't, respectable diplomat Bill Burr, a drone firm Ivanka fan, and a think tank bigot.
— Eerke Boiten (@EerkeBoiten) March 26, 2019
One of the members, Joanna Bryson, also expressed surprise on Twitter at being selected for the council, stating that she has no idea what she is getting into but that she will certainly do her best.
You're getting into it with the head of the Heritage Foundation, an organization who is anti-LGBT, anti-environment, and anti-immigrant. Is that really the best company to keep on these issues?
— Luke Stark PhD (@luke_stark) March 26, 2019
Google’s history of controversies
Last year, Google found itself embroiled in controversy over its participation in a US Department of Defense drone program called Project Maven. After immense internal backlash and external criticism for putting employees to work on AI projects that may involve the taking of human life, Google decided to end its involvement in Maven once its contract expired.
It also put together a new set of guidelines, what CEO Sundar Pichai dubbed Google’s AI Principles, that would prohibit the company from working on any product or technology that might violate “internationally accepted norms” or “widely accepted principles of international law and human rights.”
“We recognize that such powerful technology raises equally powerful questions about its use,” Pichai wrote at the time. “How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right.” Google effectively wants its AI research to be “socially beneficial,” and that often means not taking government contracts or working in territories or markets with notable human rights violations.
Regardless, Google found itself in yet another similar controversy related to its plans to launch a search product in China, one that may involve deploying some form of artificial intelligence in a country currently trying to use that very same technology to surveil and track its citizens. Google’s pledge differs from the stances of Amazon and Microsoft, both of which have said they will continue to work with the US government. Microsoft has secured a $480 million contract to provide HoloLens headsets to the Pentagon, while Amazon continues to sell its Rekognition facial recognition software to law enforcement agencies.
Google also formed a “responsible innovation team” internally that Walker says has reviewed hundreds of different launches to date, some of which have aligned with its principles while others haven’t. For example, that team helped Google make the decision not to sell facial recognition technology until there has been more ethical and policy debate on the issue.
Why critics are skeptical of this move
Rashida Richardson, director of policy research at AI Now Institute, expressed skepticism about the ambiguity of Google and other companies’ AI principles at the MIT Technology Review Conference held in San Francisco on Tuesday.
For example, Google’s document leans heavily on the word “appropriate.”
“Who is defining what appropriate means?” she asked.
Walker said that Google’s new council is meant to foster more defined discussion.
He added that the company had over 300 people looking at machine learning fairness issues.
“We’re doing our best to put our money where our mouth is,” Walker said.
Google has previously had embarrassing technology screw-ups driven by bias in its machine learning systems, like when its photos algorithm labeled black people as gorillas.
Today’s announcement appears to be an attempt by Google to fend off broader, continued criticism of private-sector AI pursuits. Perhaps not coincidentally, it comes a day after Amazon said it would earmark $10 million with the National Science Foundation for AI fairness research, and after Microsoft executive Harry Shum said the company would add an AI-focused ethics review to its standard product audit checklist.
Between this, NSF Amazon AI partnership, and the attendees at the Stanford Institute for Human-Centered Artificial Intelligence kickoff, it's a heck of a fox guarding the henhouse time. No disrespect intended to literal foxes. https://t.co/EdNOVF4mi3
— Sean Munson (@smunson) March 26, 2019
“Thoughtful decisions require careful and nuanced consideration of how the AI principles … should apply, how to make tradeoffs when principles come into conflict, and how to mitigate risks for a given circumstance,” says Walker in an earlier blog post.