
The AI Now Institute at New York University released its third annual report on the current state of AI yesterday. The AI Now 2018 Report focuses on themes such as industry AI scandals and rising inequality. It also assesses the gap between AI ethics and meaningful accountability, and looks at the role of organizing and regulation in AI.

Let’s have a look at the key recommendations from the AI Now 2018 report.

Key Takeaways

Need for a sector-specific approach to AI governance and regulation

This year’s report reflects on the need for stronger AI regulation, achieved by expanding the powers of sector-specific agencies (such as the United States Federal Aviation Administration and the National Highway Traffic Safety Administration) to audit and monitor these technologies within their domains.

The development of AI systems is accelerating, yet there are no adequate governance, oversight, or accountability regimes to make sure these systems abide by the ethics of AI. The report states that general AI standards and certification models cannot meet the sector-specific expertise requirements of domains such as health, education, and welfare, and that this expertise is a key requirement for enhanced regulation.

“We need a sector-specific approach that does not prioritize the technology but focuses on its application within a given domain”, reads the report.

Need for tighter regulation of facial recognition AI systems

Concerns over facial recognition technology are growing, as it enables privacy infringement, mass surveillance, racial discrimination, and other harms. As per the report, stringent regulation is needed that demands stronger oversight, public transparency, and clear limitations. Moreover, providing public notice should not be the only criterion for companies to deploy these technologies. There needs to be a “high threshold” for consent, keeping in mind the risks and dangers of mass surveillance technologies.

The report highlights how “affect recognition”, a subclass of facial recognition that claims to detect personality, inner feelings, mental health, and more based on images or videos of faces, needs special attention, as it is unregulated. It states that these claims are not backed by sufficient evidence and are being applied in unethical and irresponsible ways. “Linking affect recognition to hiring, access to insurance, education, and policing creates deeply concerning risks, at both an individual and societal level”, reads the report.

Progress seems to be underway on this front: just yesterday, Microsoft recommended that tech companies publish documents explaining their facial recognition technology’s capabilities, limitations, and consequences wherever these systems are used in public.

New approaches needed for governance in AI

The report points out that internal governance structures at technology companies are failing to enforce accountability for AI systems.

“Government regulation is an important component, but leading companies in the AI industry also need internal accountability structures that go beyond ethics guidelines”, reads the report. These structures include rank-and-file employee representation on the board of directors, external ethics advisory boards, and independent monitoring and transparency efforts.

Need to waive trade secrecy and other legal claims

The report states that vendors and developers creating AI and automated decision systems for use in government should agree to waive any trade secrecy or other legal claims that would prevent the public from fully auditing and understanding their software. As per the report, corporate secrecy laws are a barrier, as they make it hard to analyze bias, contest decisions, or remedy errors. Companies wanting to use these technologies in the public sector should demand that vendors waive these claims before entering into an agreement.

Companies should protect workers from raising ethical concerns

It has become common for tech employees to organize and resist in order to promote accountability and ethical decision making. It is the responsibility of tech companies to protect their workers’ ability to organize, whistleblow, and make ethical choices about the projects they work on.

“This should include clear policies accommodating and protecting conscientious objectors, ensuring workers the right to know what they are working on, and the ability to abstain from such work without retaliation or retribution”, reads the report.

Need for more truth in advertising of AI products

The report highlights that the hype around AI has created a gap between marketing promises and actual product performance, posing risks to both individuals and commercial customers.

As per the report, AI vendors should be held to high standards for the promises they make, especially when there is not enough scientific evidence behind those promises or information about their consequences.

Need to address exclusion and discrimination within the workplace

The report states that technology companies and the AI field focus on the “pipeline model,” which aims to train and hire more diverse employees.

However, it is important for tech companies to address deeper issues, such as harassment on the basis of gender, race, and other factors within their workplaces. They should also examine the relationship between exclusionary cultures and the products they build, so as to create tools that do not perpetuate bias and discrimination.

Need for a detailed account of the “full stack supply chain”

As per the report, better accountability requires understanding the parts of an AI system and the full supply chain on which it relies. “This means it is important to account for the origins and use of training data, test data, models, the application program interfaces (APIs), and other components over a product lifecycle”, reads the report.

This process is called accounting for the “full stack supply chain” of AI systems, and it is necessary for a more responsible form of auditing. The full stack supply chain takes into consideration the true environmental and labor costs of AI systems, including energy use, labor for content moderation and training data creation, and reliance on workers for the maintenance of AI systems.

More funding and support for litigation and labor organizing on AI issues

The report states that there is a need for increased support for legal redress and civic participation.

This includes supporting public advocates who represent people excluded from social services due to algorithmic decision making, as well as civil society organizations and labor organizers who support groups facing the dangers of job loss and exploitation.

Need for university AI programs to expand beyond the computer science discipline

The report states that there is a need for university AI programs and syllabi to expand their disciplinary orientation by including social and humanistic disciplines. For AI efforts to truly make a social impact, faculty and students within computer science departments must be trained to research the social world. Some have already started to act on this; for instance, Mitchell Baker, chairwoman and co-founder of Mozilla, has talked about the need for the tech industry to expand beyond technical skills by bringing in the humanities.

“Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations”, reads the report.

For more coverage, check out the official AI Now 2018 report.

Read Next

Unity introduces guiding Principles for ethical AI to promote responsible use of AI

Teaching AI ethics – Trick or Treat?

Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms
