The Next Fear on A.I.: Hollywood’s Killer Robots Become the Military’s Tools

Source: The New York Times    2023-05-09 04:57



When President Biden announced sharp restrictions in October on selling the most advanced computer chips to China, he sold it in part as a way of giving American industry a chance to restore its competitiveness.
But at the Pentagon and the National Security Council, there was a second agenda: arms control.
If the Chinese military cannot get the chips, the theory goes, it may slow its effort to develop weapons driven by artificial intelligence. That would give the White House, and the world, time to figure out some rules for the use of artificial intelligence in sensors, missiles and cyberweapons, and ultimately to guard against some of the nightmares conjured by Hollywood — autonomous killer robots and computers that lock out their human creators.
Now, the fog of fear surrounding the popular ChatGPT chatbot and other generative A.I. software has made the limiting of chips to Beijing look like just a temporary fix. When Mr. Biden dropped by a meeting in the White House on Thursday of technology executives who are struggling with limiting the risks of the technology, his first comment was “what you are doing has enormous potential and enormous danger.”
It was a reflection, his national security aides say, of recent classified briefings about the potential for the new technology to upend war, cyber conflict and — in the most extreme case — decision-making on employing nuclear weapons.
But even as Mr. Biden was issuing his warning, Pentagon officials, speaking at technology forums, said they thought the idea of a six-month pause in developing the next generations of ChatGPT and similar software was a bad idea: The Chinese won’t wait, and neither will the Russians.
“If we stop, guess who’s not going to stop: potential adversaries overseas,” the Pentagon’s chief information officer, John Sherman, said on Wednesday. “We’ve got to keep moving.”
His blunt statement underlined the tension felt throughout the defense community today. No one really knows what these new technologies are capable of when it comes to developing and controlling weapons, and they have no idea what kind of arms control regime, if any, might work.
The foreboding is vague, but deeply worrisome. Could ChatGPT empower bad actors who previously wouldn’t have easy access to destructive technology? Could it speed up confrontations between superpowers, leaving little time for diplomacy and negotiation?
“The industry isn’t stupid here, and you are already seeing efforts to self-regulate,” said Eric Schmidt, the former Google chairman who served as the inaugural chairman of the advisory Defense Innovation Board from 2016 to 2020.
“So there’s a series of informal conversations now taking place in the industry — all informal — about what would the rules of A.I. safety look like,” said Mr. Schmidt, who has written, with former secretary of state Henry Kissinger, a series of articles and books about the potential of artificial intelligence to upend geopolitics.
The preliminary effort to put guardrails into the system is clear to anyone who has tested ChatGPT’s initial iterations. The bots will not answer questions about how to harm someone with a brew of drugs, for example, or how to blow up a dam or cripple nuclear centrifuges, all operations the United States and other nations have engaged in without the benefit of artificial intelligence tools.
But those blacklists of actions will only slow misuse of these systems; few think they can completely stop such efforts. There is always a hack to get around safety limits, as anyone who has tried to turn off the urgent beeps on an automobile’s seatbelt warning system can attest.
Though the new software has popularized the issue, it is hardly a new one for the Pentagon. The first rules on developing autonomous weapons were published a decade ago. The Pentagon’s Joint Artificial Intelligence Center was established five years ago to explore the use of artificial intelligence in combat.
Some weapons already operate on autopilot. Patriot missiles, which shoot down missiles or planes entering a protected airspace, have long had an “automatic” mode. It enables them to fire without human intervention when overwhelmed with incoming targets faster than a human could react. But they are supposed to be supervised by humans who can abort attacks if necessary.
The assassination of Mohsen Fakhrizadeh, Iran’s top nuclear scientist, was conducted by Israel’s Mossad using an autonomous machine gun that was assisted by artificial intelligence, though there appears to have been a high degree of remote control. Russia said recently it has begun to manufacture — but has not yet deployed — its undersea Poseidon nuclear torpedo. If it lives up to the Russian hype, the weapon would be able to travel across an ocean autonomously, evading existing missile defenses, to deliver a nuclear weapon days after it is launched.
So far there are no treaties or international agreements that deal with such autonomous weapons. In an era when arms control agreements are being abandoned faster than they are being negotiated, there is little prospect of such an accord. But the kind of challenges raised by ChatGPT and its ilk are different, and in some ways more complicated.
In the military, A.I.-infused systems can speed up the tempo of battlefield decisions to such a degree that they create entirely new risks of accidental strikes, or decisions made on misleading or deliberately false alerts of incoming attacks.
“A core problem with A.I. in the military and in national security is how do you defend against attacks that are faster than human decision-making, and I think that issue is unresolved,” Mr. Schmidt said. “In other words, the missile is coming in so fast that there has to be an automatic response. What happens if it’s a false signal?”
The Cold War was littered with stories of false warnings — once because a training tape, meant to be used for practicing nuclear response, was somehow put into the wrong system and set off an alert of a massive incoming Soviet attack. (Good judgment led to everyone standing down.) Paul Scharre, of the Center for a New American Security, noted in his 2018 book “Army of None” that there were “at least 13 near use nuclear incidents from 1962 to 2002,” which “lends credence to the view that near miss incidents are normal, if terrifying, conditions of nuclear weapons.”
For that reason, when tensions between the superpowers were a lot lower than they are today, a series of presidents tried to negotiate building more time into nuclear decision making on all sides, so that no one rushed into conflict. But generative A.I. threatens to push countries in the other direction, toward faster decision-making.
The good news is that the major powers are likely to be careful — because they know what the response from an adversary would look like. But so far there are no agreed-upon rules.
Anja Manuel, a former State Department official and now a principal in the consulting group Rice, Hadley, Gates and Manuel, wrote recently that even if China and Russia are not ready for arms control talks about A.I., meetings on the topic would result in discussions of what uses of A.I. are seen as “beyond the pale.”
Of course, the Pentagon will also worry about agreeing to many limits.
“I fought very hard to get a policy that if you have autonomous elements of weapons, you need a way of turning them off,” said Danny Hillis, a computer scientist who was a pioneer in parallel computers that were used for artificial intelligence. Mr. Hillis, who also served on the Defense Innovation Board, said that Pentagon officials pushed back, saying, “If we can turn them off, the enemy can turn them off, too.”
The bigger risks may come from individual actors, terrorists, ransomware groups or smaller nations with advanced cyber skills — like North Korea — that learn how to clone a smaller, less restricted version of ChatGPT. And they may find that the generative A.I. software is perfect for speeding up cyberattacks and targeting disinformation.
Tom Burt, who leads trust and safety operations at Microsoft, which is speeding ahead with using the new technology to revamp its search engines, said at a recent forum at George Washington University that he thought A.I. systems would help defenders detect anomalous behavior faster than they would help attackers. Other experts disagree. But he said he feared artificial intelligence could “supercharge” the spread of targeted disinformation.
All of this portends a new era of arms control.
Some experts say that since it would be impossible to stop the spread of ChatGPT and similar software, the best hope is to limit the specialty chips and other computing power needed to advance the technology. That will doubtless be one of many different arms control plans put forward in the next few years, at a time when the major nuclear powers, at least, seem uninterested in negotiating over old weapons, much less new ones.