Last week, the Beijing Academy of Artificial Intelligence (BAAI) released a set of 15 principles, termed the Beijing AI Principles, calling for artificial intelligence to be beneficial and responsible. They have been proposed as an initiative for the research, development, use, governance, and long-term planning of AI. The document lays out guidelines for the research and development of AI, the use of AI, and the governance of AI.
The Beijing Academy of Artificial Intelligence (BAAI) is an organization backed by the Chinese Ministry of Science and Technology and the Beijing municipal government. These principles have been developed in collaboration with Peking University, Tsinghua University, the Institute of Automation and Institute of Computing Technology within the Chinese Academy of Sciences, and China’s three big tech firms: Baidu, Alibaba, and Tencent.
Research and Development
- Do Good
It states that AI should be developed to benefit all humankind and the environment, and to enhance the well-being of society and ecology.
- For Humanity
AI should always serve humanity and conform to human values as well as the overall interests of humankind. It also specifies that AI should never go against, utilize or harm human beings.
- Be Responsible
While developing AI, researchers should be aware of its potential ethical, legal, and social impacts and risks, and concrete actions should be taken to reduce and avoid them.
- Control Risks
AI systems should be developed in a way that ensures the security of data as well as the safety and security of the AI system itself.
- Be Ethical
AI systems should be trustworthy, meaning they should be traceable, auditable, and accountable.
- Be Diverse and Inclusive
The development of AI should reflect diversity and inclusiveness, such that nobody is easily neglected or underrepresented in AI applications.
- Open and Share
An open AI platform will help avoid data and platform monopolies and allow the benefits of AI development to be shared.
Use of AI
- Use Wisely and Properly
The users of AI systems should have sufficient knowledge and ability to avoid possible misuse and abuse, so as to maximize its benefits and minimize the risks.
AI systems should also be designed so that, in unexpected circumstances, users' own rights and interests are not compromised.
- Education and Training
Stakeholders of AI systems should be educated and trained to help them adapt to the impact of AI development in psychological, emotional and technical aspects.
Governance of AI
- Optimizing Employment
Developers should have a cautious attitude towards the potential impact of AI on human employment. Explorations on Human-AI coordination and new forms of work should be encouraged.
- Harmony and Cooperation
Harmony and cooperation should be embodied in the AI governance ecosystem, so as to avoid a malicious AI race, to share AI governance experience, and to jointly cope with the impact of AI under the philosophy of “Optimizing Symbiosis”.
- Adaptation and Moderation
Revisions of AI principles, policies, and regulations should be actively considered so that they keep pace with the development of AI, to the benefit of society and nature.
- Subdivision and Implementation
Various fields and scenarios of AI applications should be actively researched, so that more specific and detailed guidelines can be formulated.
- Long-term Planning
Constant research on the potential risks of augmented intelligence, artificial general intelligence (AGI), and superintelligence should be encouraged, to ensure that AI remains beneficial to society and nature in the future.
These AI principles are aimed at enabling the healthy development of AI in a way that supports the human community and a shared future, to the benefit of humankind and nature in general.
China releasing its own version of AI principles has come as a surprise to many, since the country is infamous for using AI to monitor its citizens. The move comes after the European High-Level Expert Group on AI released its ‘Ethics guidelines for trustworthy AI’ earlier this year. The Beijing AI Principles are also similar to the AI principles published by Google last year, which likewise provided guidelines for ensuring that AI applications benefit humans.
By releasing its own version of AI principles, is China signalling to the world that it is ready to talk about AI ethics, especially after the U.S. blacklisted China’s telecom giant Huawei as a threat to national security?
As expected, some users are surprised by China’s sudden show of concern for AI ethics.
Beijing AI Principles. Automated decision making & computer vision are already used to violate human rights in Xinxiang. It does not protect privacy, dignity, autonomy, freedom. Oh, that's just in R&D. Ok, no informed consent then. So it's just for AGI? https://t.co/nfMFutScet
— Şerife (Sherry) Wong (@sherrying) May 29, 2019
There are some laudable and some rather bizarre elements to this 'Beijing consensus' on AI principles, but I want to point out the phrasing that emphasizes promoting AI's "healthy development to support the construction of a community of common destiny." https://t.co/TsgJiyjnkZ
— Elsa B. Kania (@EBKania) May 30, 2019
Others, meanwhile, are impressed with this move by China.
Beijing AI Principles – 新闻 useful update on Chinese thinking on AI principles and approaches from the Beijing Academy of AI. Europe will have to earn the moral high ground on AI ethics. https://t.co/6rC3ZxsSpG
— Tim Gordon (@t_gordon) June 3, 2019
Beijing Academy of AI publishes AI Principles. Ethics increasingly important in Chinese AI development https://t.co/hdl6pmwqis
— Menelaos Mazarakis (@mgmazarakis) May 30, 2019
Visit the BAAI website to read the Beijing AI Principles in full.