Artificial Intelligence, Real Risks: Understanding—and Mitigating—Vulnerabilities in the Military Use of AI

Nick Starck, David Bierbrauer and Paul Maxwell | 01.18.22

Editor’s note: This article is part of the Army Cyber Institute’s contribution to the series, “Compete and Win: Envisioning a Competitive Strategy for the Twenty-First Century.” The series endeavors to present expert commentary on diverse issues surrounding US competitive strategy and irregular warfare with peer and near-peer competitors in the physical, cyber, and information spaces. The series is part of the Competition in Cyberspace Project (C2P), a joint initiative by the Army Cyber Institute and the Modern War Institute. Read all articles in the series here.

Special thanks to series editors Capt. Maggie Smith, PhD, C2P director, and Dr. Barnett S. Koven.

No one likes to wake up in the morning, but now that artificial intelligence–powered algorithms set our alarms, manage the temperature settings in our homes, and select playlists to match our moods, snooze buttons are used less and less. AI safety-assist systems make our vehicles safer, and AI algorithms optimize police patrols to make the neighborhoods we drive through, and live in, safer as well. All around us, AI is there, powering the tools and devices that shape our environment, augment and assist us in our daily routines, and nudge us to make choices about what to eat, wear, and purchase, with and without our consent. However, AI is also there when our smart devices start deciding who among us is suspicious, when a marginalized community is disproportionately targeted for police patrols, and when a self-driving car kills a jaywalker.

AI is becoming ubiquitous in daily life, and war is no exception to the trend. Reporting even suggests that the November 2020 assassination of the top Iranian nuclear scientist was carried out by an autonomous, AI-augmented rifle capable of firing up to six hundred rounds per minute. Russia and China are rapidly developing, and in some cases deploying, AI-enabled irregular warfare capabilities, and it is only a matter of time before the same fissures, biases, and undesirable outcomes that are occurring with the AI systems that power our daily lives begin appearing in the AI systems used to wage war and designed to kill.

https://mwi.usma.edu/artificial-intelligence-real-risks-understanding-and-mitigating-vulnerabilities-in-the-military-use-of-ai/