The 23 Asilomar Principles of Artificial Intelligence
In January 2017, at the Beneficial AI Conference in Asilomar, California, a group of experts in artificial intelligence and robotics jointly signed the 23 Asilomar AI Principles, calling on the world to strictly abide by these principles while developing artificial intelligence and to jointly safeguard the ethics, interests, and security of humanity in the future. The Asilomar AI Principles can be seen as an extended version of Asimov's famous Three Laws of Robotics.
These 23 principles, an effort led by the Future of Life Institute, are designed to help humanity avoid the potential risks that accompany emerging technologies. The institute's prominent members include Stephen Hawking and Elon Musk, and it focuses on the potential threats posed by new technologies such as artificial intelligence, biotechnology, nuclear weapons, and climate change.
The content of this article is taken from https://futureoflife.org/ai-principles/; the Chinese translation has been revised, and some personal comments have been added. Discussion is welcome.
Research Issues
1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
(Comment: This limits the goal of AI development to beneficial intelligence, to avoid possible runaway development.)
2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:
(Comment: In other words, for every hundred dollars spent on AI, a dollar or two should go toward studying its moral and ethical side effects.)
· How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
(Comment: Worries that AI will malfunction, take major unexpected actions, or be exploited by hackers.)
· How can we grow our prosperity through automation while maintaining people's resources and purpose?
(Comment: Fear that AI will take resources away from humans.)
· How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
· What set of values should AI be aligned with, and what legal and ethical status should it have?
3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.
(Comment: Scientists and politicians need to work together on this issue)
4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.
(Comment: Researchers shouldn't be fighting among themselves, either.)
5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.
(Comment: Fear that some teams will cut corners on AI safety in order to move faster.)
Ethics and Values
6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
(Comment: This is actually rather difficult; even the designers of deep networks do not fully understand how they work.)
8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
(Comment: Every judicial decision involving AI must come with an explanation that a competent human authority can audit.)
9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
Please see the Chinese version for the remaining principles.