In January 2017, at the Beneficial AI conference held in Asilomar, California, nearly a thousand experts in artificial intelligence and robotics jointly signed the 23 Asilomar AI Principles, calling on the world to strictly observe them while developing AI, so as to safeguard the ethics, interests, and safety of humanity's future. The Asilomar AI Principles can be seen as an expanded version of Asimov's famous Three Laws of Robotics.
The 23 principles were drafted under the leadership of the Future of Life Institute, with the aim of helping humanity avoid the potential risks that accompany new technologies. The institute's prominent members have included Stephen Hawking and Elon Musk, and it focuses on potential threats posed by emerging technologies and issues such as artificial intelligence, biotechnology, nuclear weapons, and climate change.
The content below is taken from the institute's website, https://futureoflife.org/ai-principles/, with corrections to the translation and some personal commentary added; discussion is welcome.
Research Issues
1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
(Comment: this restricts the goal of AI development to beneficial intelligence, to avoid the loss of control that aimless development could bring.)
2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:
(Comment: roughly, for every hundred dollars you spend on AI research, a dollar or two should go toward studying the ethical and social questions it raises.)
· How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
(Comment: the worry is that a malfunctioning AI could take drastic, unexpected actions, or be exploited by hackers.)
· How can we grow our prosperity through automation while maintaining people’s resources and purpose?
(Comment: the worry is that AI could seize resources from humans.)
· How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
· What set of values should AI be aligned with, and what legal and ethical status should it have?
3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.
(Comment: researchers and policy-makers need to work hand in hand on this problem.)
4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.
(Comment: researchers should not be fighting among themselves either.)
5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.
(Comment: the worry is that some teams will cut corners on AI safety in order to win the race.)
Ethics and Values
6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
(Comment: this is actually rather hard; for deep networks, even the designers do not really understand how they work.)
8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
(Comment: any AI-related judicial decision must be auditable by a competent human authority.)
9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
(Comment: whoever does the research benefits from it and bears responsibility for it.)
10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
(Comment: the question is whether we can even understand some of the data an AI produces.)
13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
14) Shared Benefit: AI technologies should benefit and empower as many people as possible.
15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
Longer-term Issues
19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
(Comment: in other words, we should not readily accept claimed limits on AI capability, lest we underestimate its negative effects.)
20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.