Pentagon advisory board releases principles for ethical use of artificial intelligence in warfare

Hoping to prepare for what many see as a coming revolution in weaponry enabled by artificial intelligence ― and convince a skeptical public that it can apply such innovations responsibly ― the U.S. military is taking early steps to define the ethical boundaries for how it will use such systems.

On Thursday, a Pentagon advisory organization called the Defense Innovation Board published a set of ethical principles for how military agencies should design AI-enabled weapons and apply them on the battlefield. The board’s recommendations are not legally binding. It now falls to the Pentagon to determine whether and how to proceed with them.

Lt. Gen. Jack Shanahan, director of the Defense Department’s Joint Artificial Intelligence Center, said he hopes the recommendations will set the standard for the responsible and ethical use of such tools.

“The DIB’s recommendations will help enhance the DOD’s commitment to upholding the highest ethical standards as outlined in the DOD AI strategy, while embracing the U.S. military’s strong history of applying rigorous testing and fielding standards for technology innovations,” Shanahan said in a statement emailed to reporters.

Artificial intelligence algorithms are computer programs that can learn from past data and make choices without the input of a human. Such programs have already proved useful in analyzing the vast quantities of intelligence data that military and intelligence agencies collect, and the commercial business world has found myriad uses for them.

But the prospect of computers making decisions in a combat scenario has been met with skepticism from some corners of the tech world.

In 2017, a group of 116 technology executives asked the United Nations to pursue an all-out ban on autonomous weapons. Google went so far as to ban the use of its AI algorithm in any weapons system, a decision that followed employee complaints over its involvement in a program to analyze drone footage. Other tech companies, such as Microsoft and Amazon, have embraced opportunities to work with the military while arguing for a more nuanced approach to the matter. The Pentagon’s known uses of AI are a far cry from the dystopian visions that have appeared in popular fiction for decades.

The Army has been experimenting with “predictive maintenance” programs, hoping to flag failing vehicle parts before they break down in combat. Defense and intelligence agencies have been using artificial intelligence to analyze drone footage, hoping to spare Air Force personnel countless hours of staring at video collected by surveillance aircraft.

Last year, the Defense Department created a Joint Artificial Intelligence Center to coordinate AI-related activities across the services, and unveiled an artificial intelligence strategy focused on speeding up its use of such technology to compete with Russia and China. Thus far, the Defense Department has just been dipping its toes in, analysts say.

“What you see DOD searching for is some early use cases that are relatively easy from a tech standpoint and from a policy and cultural standpoint,” said Paul Scharre, a former Army Ranger and Pentagon official who studies the issue at the Center for a New American Security, a think tank. “They’re looking for the ability to demonstrate clear value,” he said.

The AI principles released Thursday were light on specifics, setting few of the hard-and-fast boundaries that AI skeptics might have hoped for.

The recommendations for the Defense Department pertained mostly to broadly defined goals such as “formalizing these principles” or “cultivating the field of AI engineering.” Other recommendations included setting up a steering committee and establishing workforce training programs.

While short on specifics, the document did establish a set of high-level ethical goals the department should strive for in designing AI-enabled systems.

It clarified that AI systems should first and foremost be “responsible” and always under the full control of humans. The document specified that AI systems should be “equitable,” recognizing that some AI systems have already been shown to exhibit racial biases.

The document asserted that the systems should also be “traceable,” enabling their design and use to be audited by outside observers, and “reliable” enough to function as intended. And the systems should be “governable,” so they can be shut off when found to be acting inappropriately.

This article originally appeared on washingtonpost.com.