
The Public Voice Coalition, an organization that promotes public participation in decisions about the future of the Internet, today announced its Universal Guidelines on Artificial Intelligence (UGAI). The UGAI were unveiled at the 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC), currently underway in Brussels, Belgium.

The ICDPPC is a worldwide forum where independent regulators come together to explore high-level recommendations on privacy, freedom, and data protection, addressed to governments and international organizations. Speakers at the 40th ICDPPC include Tim Berners-Lee (director of the World Wide Web Consortium), Tim Cook (CEO, Apple Inc.), Giovanni Buttarelli (European Data Protection Supervisor), and Jagdish Singh Khehar (44th Chief Justice of India), among others.

The UGAI combine elements of human rights doctrine, data protection law, and ethical guidelines.

“We propose these Universal Guidelines to inform and improve the design and use of AI. The Guidelines are intended to maximize the benefits of AI, to minimize the risk, and to ensure the protection of human rights. These guidelines should be incorporated into ethical standards, adopted in national law and international agreements, and built into the design of systems,” reads the announcement page.

The UGAI comprise twelve principles for AI governance, many of which have not been previously covered in similar policy frameworks.

Let’s take a look at each of these principles.

Transparency Principle

The Transparency Principle emphasizes an individual’s right to know the basis of an AI decision that concerns them. This means individuals affected by such a decision should have access to the factors, the logic, and the techniques that produced the outcome.

Right to human determination

The Right to Human Determination asserts that individuals, not machines, should bear final responsibility for automated decision-making. For instance, during the operation of an autonomous vehicle, it is impractical to insert a human decision before each automated one. However, if an automated system fails, this principle requires a human assessment of the outcome to ensure accountability.

Identification Obligation

This principle establishes a foundation for AI accountability by making clear the identity of an AI system and of the institution responsible for it. An AI system usually knows a great deal about an individual, but the individual may not even be aware of the system’s operator.

Fairness Obligation

The Fairness Obligation emphasizes that assessing an AI system’s objective outcomes is not sufficient to evaluate it. Institutions must also ensure that their AI systems do not reflect unfair bias or make discriminatory decisions.

Assessment and Accountability Obligation

This principle requires that an AI system be assessed on factors such as its benefits, purpose, objectives, and risks, both before and during deployment, and that it be deployed only after such an evaluation is complete. If the assessment reveals substantial risks to public safety or cybersecurity, the system should not be deployed. This, in turn, ensures accountability.

Accuracy, Reliability, and Validity Obligations

This principle sets out the key responsibilities for the outcomes of automated decisions: institutions must ensure the accuracy, reliability, and validity of the decisions their AI systems make.

Data Quality Principle

This principle emphasizes the need for institutions to establish data provenance and to assure the quality and relevance of the data fed into AI algorithms.

Public Safety Obligation

This principle requires institutions to assess the public safety risks arising from AI systems that control devices in the physical world, and to implement the necessary safety controls within such systems.

Cybersecurity Obligation

This principle follows from the Public Safety Obligation and requires institutions that develop and deploy AI systems to take cybersecurity threats into account.

Prohibition on Secret Profiling

This principle states that no institution shall establish a secret profiling system. This is to ensure the possibility of independent accountability.

Prohibition on Unitary Scoring

This principle states that no national government shall maintain a general-purpose score on its citizens or residents. “A unitary score reflects not only a unitary profile but also a predetermined outcome across multiple domains of human activity,” reads the guideline page.

Termination Obligation

The Termination Obligation states that an institution has an affirmative obligation to terminate an AI system it has built if human control of that system is no longer possible.

For more information, check out the official UGAI documentation.
