Machine Learning and Life-and-Death Decisions on the Battlefield
Brad DeWees, Chris Umphres, and Maddy Tung
January 11, 2021
In 1946 the New York Times revealed one of World War II’s top secrets — “an amazing machine which applies electronic speeds for the first time to mathematical tasks hitherto too difficult and cumbersome for solution.” One of the machine’s creators offered that its purpose was to “replace, as far as possible, the human brain.” While this early version of a computer did not replace the human brain, it did usher in a new era in which, according to the historian Jill Lepore, “technological change wildly outpaced the human capacity for moral reckoning.”

That era continues with the application of machine learning to questions of command and control. The application of machine learning is in some areas already a reality — the U.S. Air Force, for example, has used it as a “working aircrew member” on a military aircraft, and the U.S. Army is using it to choose the right “shooter” for a target identified by an overhead sensor. The military is making strides toward using machine learning algorithms to direct robotic systems, analyze large sets of data, forecast threats, and shape strategy. Using algorithms in these areas and others offers awesome military opportunities — from saving person-hours in planning to outperforming human pilots in dogfights to using a “multihypothesis semantic engine” to improve our understanding of global events and trends. Yet with the opportunity of machine learning comes ethical risk — the military could surrender life-and-death choice to algorithms, and surrendering choice abdicates one’s status as a moral actor.

https://warontherocks.com/2021/01/machine-learning-and-life-and-death-decisions-on-the-battlefield/