Author Topic: Why the Military Can’t Trust AI  (Read 194 times)

Offline rangerrebew

  • TBR Contributor
  • *****
  • Posts: 166,231
Why the Military Can’t Trust AI
« on: April 29, 2024, 04:36:00 pm »
Why the Military Can’t Trust AI
Large Language Models Can Make Bad Decisions—and Could Trigger Nuclear War
By Max Lamparth and Jacquelyn Schneider
April 29, 2024
 
https://www.foreignaffairs.com/united-states/why-military-cant-trust-ai
 
In 2022, OpenAI unveiled ChatGPT, a chatbot that uses large language models to mimic human conversations and to answer users’ questions. The chatbot’s extraordinary abilities sparked a debate about how LLMs might be used to perform other tasks—including fighting a war. Although for some, including the Global Legal Action Network, LLMs and other generative AI technologies hold the promise of more discriminate and therefore ethical uses of force, others, such as advisers from the International Committee of the Red Cross, have warned that these technologies could remove human decision-making from the most vital questions of life and death.

The U.S. Department of Defense is now seriously investigating what LLMs can do for the military. In the spring of 2022, the DOD established the Chief Digital and Artificial Intelligence Office to explore how artificial intelligence can help the armed forces. In November 2023, the Defense Department released its strategy for adopting AI technologies. It optimistically reported that “the latest advancements in data, analytics, and AI technologies enable leaders to make better decisions faster, from the boardroom to the battlefield.” Accordingly, AI-enabled technologies are now in use: U.S. troops, for example, have used AI-enabled systems to select Houthi targets in the Middle East.

Both the U.S. Marine Corps and the U.S. Air Force are experimenting with LLMs, using them for war games, military planning, and basic administrative tasks. Palantir, a company that develops information technology for the DOD, has created a product that uses LLMs to manage military operations. Meanwhile, the DOD has formed a new task force to explore the use of generative AI, including LLMs, within the U.S. military.

https://www.foreignaffairs.com/united-states/why-military-cant-trust-ai
The legitimate powers of government extend to such acts only as are injurious to others. But it does me no injury for my neighbor to say there are twenty gods, or no god. It neither picks my pocket nor breaks my leg.
Thomas Jefferson

Online DB

  • Hero Member
  • *****
  • Posts: 13,488
Re: Why the Military Can’t Trust AI
« Reply #1 on: April 29, 2024, 04:44:28 pm »
AI can be corrupted with little trace.

Offline DefiantMassRINO

  • Hero Member
  • *****
  • Posts: 10,345
  • Gender: Male
Re: Why the Military Can’t Trust AI
« Reply #2 on: April 29, 2024, 05:04:13 pm »
AI is one tool and one source, which needs corroboration from other tools and sources.

Self-Anointed Deplorable Expert Chowderhead Pundit
I reserve my God-given rights to be wrong and to be stupid at all times.

"If at first you don’t succeed, destroy all evidence that you tried." - Steven Wright

Comrades, I swear on Trump's soul that I am not working from a CIA troll farm in Kiev.

Online DB

  • Hero Member
  • *****
  • Posts: 13,488
Re: Why the Military Can’t Trust AI
« Reply #3 on: April 29, 2024, 05:28:16 pm »
Quote from: DefiantMassRINO on April 29, 2024, 05:04:13 pm
AI is one tool and one source, which needs corroboration from other tools and sources.

AI can be so muddled in its operation that no one can check it, especially as it gains control of the external checks and balances.