Nuclear Weapons 689 - Dangers Of AI To The Control Of Nuclear Arsenals
Many movies, TV shows and novels feature military artificial intelligence systems that are hacked either to start a nuclear war or to create the illusion that a nuclear war has begun, inviting retaliation. With advances in nuclear weapons systems and artificial intelligence, what had been fiction is moving closer to becoming a real threat to the world.
The International Campaign to Abolish Nuclear Weapons (ICAN) is a non-profit organization dedicated to the elimination of nuclear weapons. ICAN won the Nobel Peace Prize in 2017 for that work.
Beatrice Fihn is the executive director of ICAN. She recently raised concerns about AI and nuclear weapons. She is afraid that the danger of hackers either launching an attack or tricking a nuclear-armed nation into launching one is increasing and needs to be discussed and dealt with. We now have computer programs that can create fake videos in which anyone can be made to say anything. Hackers might use this “deep fake” technology to fool the leaders of a nuclear-armed country into believing that the leaders of another country are preparing a pre-emptive nuclear strike.
Fihn told an interviewer that she would like to convene a meeting this fall with nuclear weapons experts and some of the leading companies in AI and cybersecurity. The event would be “off-the-record” but would produce a document that ICAN could distribute to governments, organizations and individuals, warning them of the danger.
“Some companies are more powerful than governments today in terms of shaping the world,” Fihn said. She wants to “engage them in thinking about how they can contribute to a more sustainable world, one that reduces the threat of extinction.” So far, Microsoft, Google’s DeepMind AI division and other companies leading AI research have expressed interest in contributing to the ICAN project but have declined to comment to the press. Fihn says that some of these companies know it is an important subject, but they are intimidated by it.
Generally, AI has a good reputation and is described as a boon to the world. Extensive use in medicine is improving therapies, and self-driving cars promise to reduce accidents. But there is also a fear of AI running out of control, a scenario portrayed in the media and debated among intellectuals. Fihn says, “We don’t want to advocate for any restrictions on A.I. But this technological development is happening—we have to be very careful.” She wonders whether AI poses realistic dangers or whether the concerns are being driven by overactive imaginations.
Fihn points out that there is a great deal of secrecy involved in the control systems for launching nuclear weapons. This makes it difficult to know just how far AI has penetrated into these critical systems. It is generally understood that hacking is a serious danger to the control of nuclear arsenals.
Many years ago, a cynic coined the term “cybercrud.” He defined it as blaming computer systems for problems that are really caused by human incompetence. I think it is safe to say that we have as much to fear from human error as from AI.