Author Topic: AN ETHICAL DILEMMA: WEAPONIZATION OF ARTIFICIAL INTELLIGENCE


Offline rangerrebew

  • TBR Contributor
  • Posts: 166,526
AN ETHICAL DILEMMA: WEAPONIZATION OF ARTIFICIAL INTELLIGENCE
« on: February 02, 2023, 03:20:37 pm »
Sun, 01/29/2023 - 5:51pm
 

An Ethical Dilemma: Weaponization of Artificial Intelligence

By Justin K. Steinhoff

 

Over the past decade, technological transformations have changed the world we live in, from advances in voice assistant technologies, facial recognition software, and cryptocurrency markets to fully autonomous self-driving vehicles and neural network sensing technologies. Today, voice assistant devices built on artificial intelligence (AI) and machine learning (ML) technologies, such as Amazon Echo and Google Home, are part of the daily lives of more than half of all United States (US) adults (Juniper Research, 2017). The advancement of AI is leading a global revolution that is changing how we interact (Department of State, 2022). Similarly, the US Department of Defense (DoD) has already invested in transformative technology and AI/ML adoption (Chief Digital and Artificial Intelligence Office [CDAO], 2022a).

Numerous companies collect and utilize customer information to meet customer demands and to develop sales strategies and targeted advertising campaigns (Goddard, 2019). More troublingly, organizations such as Oracle and Cambridge Analytica rely on vast amounts of information, dubbed 'big data,' to create psychometric profiles designed to influence the population (Goddard, 2019; Porotsky, 2019). Likewise, the US Government utilizes and operationalizes vast amounts of big data, AI, and ML to influence military operations (Department of State, 2022). For example, in June 2022, the DoD CDAO and the US Air Force conducted a joint exercise to evaluate and assess a project called Smart Sensor Brain. The project uses an AI-enabled autonomous unmanned aerial system that can conduct "automated surveillance and reconnaissance functions in contested environments" (CDAO Public Affairs, 2022, para. 3). Using these transformative technologies creates moral challenges and numerous ethical dilemmas as the US Government weaponizes AI, ML, and autonomous systems.

The Problem

Weaponizing advanced technologies that rely on AI, ML, and autonomy raises significant ethical and moral challenges that the US Government needs to address appropriately. In 2017, at the National Governors Association meeting, Elon Musk, chief executive officer of Tesla, Inc. and SpaceX, Inc., identified AI safety as what he views as the biggest empirical threat facing the US (Molina, 2017). Musk argued that the government needs to consider AI regulation due to the "fundamental existential risk for human civilization" (Molina, 2017, para. 1). The concept of AI has been present for decades, and as advancements in ML, specifically neural network systems, continue, new capabilities will continue to expand the world of possibilities.

https://smallwarsjournal.com/jrnl/art/ethical-dilemma-weaponization-artificial-intelligence
The legitimate powers of government extend to such acts only as are injurious to others. But it does me no injury for my neighbor to say there are twenty gods, or no god. It neither picks my pocket nor breaks my leg.
Thomas Jefferson