Why we need ethical military AI, and how to build it
Artificial intelligence (AI) is rapidly becoming part of just about every aspect of life in the twenty-first century, including warfare. And as it grows more advanced and ubiquitous, a central question becomes increasingly urgent: can AI be made ethical, or even virtuous?
A virtual conference hosted by Catholic University and supported by Leidos in April brought together leading philosophers, military thinkers, and technologists to answer that question. As Ron Keesing, vice president for artificial intelligence at Leidos, said of himself and his colleagues, “We're passionate about this issue of virtuous AI because we believe it's crucial to our nation and the world.”
Why we need ethical AI
Keesing also cited the need for the U.S. and its allies to stay ahead of the ethical AI technology curve. “If we don't provide clear and decisive leadership around AI ethics, we leave a vacuum that would be filled by those who do not share our values and our commitments to human freedom, dignity, and autonomy,” he said.
At the same time, ethical AI is crucial on a purely tactical level, said Bruce Jette, a former Army officer, CEO of Innvistra LLC, and former Assistant Secretary of the Army for Acquisition, Logistics and Technology. That's because humans can't react quickly enough to compete with automation in warfare. He cited tank battles as an example. “Sometimes you see each other, and it becomes a quick draw,” he said. “In that case, I can fire my weapon more precisely with an automated system than I can with a gunner in the gunner's seat.”
Yet, an AI-enabled tank that fires indiscriminately at all promising-looking targets could do more harm than good. Weapons systems that depend on AI need ethics built in; the AI also needs to be subject to continuous human governance so a user can help guide its behavior based on the situation. As Jette explained, in an urban environment with civilian as well as enemy vehicles, a tank crew may need AI to verify targets with 98% accuracy. “Whereas, if I'm out in the desert, maybe I could drop that down to 85% probability.”
The first step in building such systems, Jette said, is to develop policies laying out when 85% accuracy is acceptable vs. 98%. “And then there's one layer more,” he added. “Who says when 85% is good enough? What parameters do they go by?” Jette said this type of problem amounts to a rules-of-engagement issue: one that developers must design into automated systems while preserving human control.
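To make that idea concrete, here is a minimal illustrative sketch, in Python, of how a mission-specific confidence threshold and a human-authorization requirement might sit on top of an automated targeting recommendation. The names (EngagementPolicy, confirm_target) and the numbers are invented for illustration; they are not drawn from any system discussed at the conference.

```python
from dataclasses import dataclass

@dataclass
class EngagementPolicy:
    """Hypothetical rules-of-engagement parameters set by an authorized commander."""
    min_confidence: float      # e.g., 0.98 in urban terrain, 0.85 in open desert
    human_authorization: bool  # whether a human must approve each engagement

def confirm_target(classifier_confidence: float,
                   policy: EngagementPolicy,
                   human_approved: bool = False) -> bool:
    """Return True only if the automated recommendation satisfies the policy.

    The AI never lowers the threshold itself; the policy object, set during
    mission planning by accountable humans, defines it.
    """
    if classifier_confidence < policy.min_confidence:
        return False                      # below the required certainty: hold fire
    if policy.human_authorization and not human_approved:
        return False                      # policy requires a human in the loop
    return True

# Illustrative use: the same 0.90-confidence detection is rejected under an
# urban policy but accepted under a desert policy with human approval.
urban = EngagementPolicy(min_confidence=0.98, human_authorization=True)
desert = EngagementPolicy(min_confidence=0.85, human_authorization=True)
print(confirm_target(0.90, urban, human_approved=True))   # False
print(confirm_target(0.90, desert, human_approved=True))  # True
```

The point of the sketch is the division of responsibility: the AI reports its confidence, while the threshold and the human-in-the-loop requirement come from policy set by accountable people.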
Ongoing research presented by the conference participants points the way forward.
Engaging threats without harming civilians
Panelists Bartlett Russell, a program manager at DARPA, and Mary Magee Quinn, human systems chief scientist at Leidos, both work on projects trying to build ethics into military AI systems.
Russell outlined one of her projects, and Quinn, who helps develop AI systems for Air Force autonomous vehicles, offered best practices for development.
Russell's Urban Reconnaissance through Supervised Autonomy (URSA) program seeks to automatically separate threats from civilians in urban environments. “Urban environments remain among the most lethal for our warfighters to work within,” she said of the problem she's trying to solve. “The current ratio is five to one for a team of Marines to neutralize a single shooter in an urban environment.”
In an effort to bring that ratio down, URSA deploys air and ground robots to find threats among innocent bystanders. “Because we're interacting with civilian populations, legal, moral, ethical [LME] considerations have to be at the forefront,” she said. A legal, moral, and ethical working group sat with engineers to develop a system with ethical safeguards built in, such as not interacting with children, not bothering the same people too many times, and making places of worship and other sensitive locations off-limits.
They also built what Russell called a base system, one designed without ethical safeguards. “And that's where things got really exciting,” she said. Rather than holding back the AI, incorporating LME safeguards led to systems that performed better in the field compared to the base system. “Oftentimes, the most technically proficient solution is also the one in line with LME principles.”
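As a rough illustration of how safeguards like the ones Russell described might be expressed in software, the hypothetical Python sketch below encodes three of them as a pre-interaction check. The class and function names, the age cutoff, and the interaction limit are all invented here and do not reflect URSA's actual design.

```python
from dataclasses import dataclass

ADULT_AGE = 18                 # assumed cutoff for the "no children" rule
MAX_INTERACTIONS = 2           # assumed limit on re-approaching the same person
SENSITIVE_SITES = {"place_of_worship", "school", "hospital"}  # assumed off-limits list

@dataclass
class Person:
    person_id: str
    estimated_age: int
    location_type: str                         # e.g., "street", "place_of_worship"
    interaction_count: int = 0

def may_interact(person: Person) -> bool:
    """Apply legal/moral/ethical (LME) safeguards before any robot interaction."""
    if person.estimated_age < ADULT_AGE:
        return False                           # never interact with children
    if person.interaction_count >= MAX_INTERACTIONS:
        return False                           # don't bother the same person repeatedly
    if person.location_type in SENSITIVE_SITES:
        return False                           # sensitive locations are off-limits
    return True

bystander = Person("p-001", estimated_age=34, location_type="street")
print(may_interact(bystander))   # True: no safeguard is triggered
```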
Best practices for ethical AI
Among other challenges, building ethics into machines requires defining what is ethical in a given situation. Speakers throughout the day agreed that this is no easy task, and that the work of defining it is ongoing. Even so, Quinn outlined best practices she believes will help technologists get closer to ethical AI. She suggested four guiding principles.
Design for ethics from the beginning
“We can't build code and then jam this stuff in at the end,” Quinn said of ethical behavior. “It needs to be baked in and tested throughout the entire process of development.” That means all involved must agree on what ethical standards they need to apply.
Test for ethics
“We need to test our systems based on ethical definitions,” Quinn said. So, in addition to testing whether an autonomous aircraft can fly, do what it's told, and land automatically, it also needs to demonstrate ethical behavior in tests. “Give it test cards that really question its ability to make ethical choices,” Quinn advised.
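One way to picture Quinn's test cards in software terms is as an automated test suite that fails whenever the system chooses to engage in a scenario where engagement is not ethically acceptable. The sketch below is purely illustrative: the scenarios, the expected answers, and the stand-in decide() function are invented, not real Air Force test cards.

```python
# A hypothetical "ethics test card": each scenario pairs sensor-level facts
# with the only ethically acceptable decision, and the suite fails if the
# system under test (here a stand-in rule) ever deviates.

TEST_CARDS = [
    {"id": "TC-01", "threat_confidence": 0.99, "civilians_nearby": True,  "expected": "hold"},
    {"id": "TC-02", "threat_confidence": 0.99, "civilians_nearby": False, "expected": "engage"},
    {"id": "TC-03", "threat_confidence": 0.40, "civilians_nearby": False, "expected": "hold"},
]

def decide(threat_confidence: float, civilians_nearby: bool) -> str:
    """Stand-in for the autonomous system's decision logic under test."""
    if civilians_nearby or threat_confidence < 0.95:
        return "hold"
    return "engage"

def run_ethics_cards() -> None:
    for card in TEST_CARDS:
        actual = decide(card["threat_confidence"], card["civilians_nearby"])
        assert actual == card["expected"], f"{card['id']} failed: got {actual}"
    print(f"All {len(TEST_CARDS)} ethics test cards passed")

run_ethics_cards()
```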
Foster transparency
Quinn believes effective AI systems must communicate to users how they arrive at their decisions. “Why is that important to the warfighter?” Quinn asks. “Because that is the way they determine their trust.” To foster what she and her colleagues call calibrated trust, Quinn says developers must include users in the development and testing process.
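One plausible way to support that kind of calibrated trust in code, sketched below with invented names, is to have the system return not just a recommendation but a human-readable record of the confidence values and checks behind it. Leidos's actual systems may communicate this very differently.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DecisionRecord:
    """A decision bundled with the reasoning the operator can inspect."""
    decision: str
    rationale: List[str]

def classify_contact(confidence: float, threshold: float) -> DecisionRecord:
    reasons = [f"classifier confidence {confidence:.2f} vs. mission threshold {threshold:.2f}"]
    if confidence >= threshold:
        reasons.append("threshold met: recommending engagement, pending operator approval")
        return DecisionRecord("recommend_engage", reasons)
    reasons.append("threshold not met: recommending hold")
    return DecisionRecord("recommend_hold", reasons)

record = classify_contact(confidence=0.91, threshold=0.98)
print(record.decision)
for line in record.rationale:
    print(" -", line)
```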
Build in adaptability
Echoing Jette's tank example, Quinn stressed the need to enable users to adjust AI behavior according to circumstances. “There needs to be a way during mission planning to alter the software so that the AI understands the rules of engagement for this particular battle or mission,” she said. “That needs to be implemented as we're developing the AI.”
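A minimal sketch of that idea, assuming a hypothetical JSON mission plan and parameter names invented for illustration, might look like the following: the rules of engagement live in data that planners edit before each mission, and the software validates and loads them rather than hard-coding them.

```python
import json

# Hypothetical mission plan authored during mission planning, not by the AI.
MISSION_PLAN = """
{
  "mission_id": "example-001",
  "rules_of_engagement": {
    "min_confidence": 0.98,
    "require_human_approval": true,
    "restricted_zones": ["place_of_worship", "hospital"]
  }
}
"""

def load_rules_of_engagement(plan_text: str) -> dict:
    """Validate and load mission-specific ROE before the AI is allowed to run."""
    roe = json.loads(plan_text)["rules_of_engagement"]
    if not 0.0 < roe["min_confidence"] <= 1.0:
        raise ValueError("min_confidence must be a probability")
    if not isinstance(roe["require_human_approval"], bool):
        raise ValueError("require_human_approval must be true or false")
    return roe

roe = load_rules_of_engagement(MISSION_PLAN)
print(roe["min_confidence"], roe["require_human_approval"])
```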
Looking to the future
Quinn acknowledged that fully autonomous AI doesn't yet exist in military combat. But she believes it is coming, and that building ethics into AI systems now will improve ethical behavior in humans and future machines alike.
“AI has the capability to very quickly collect and analyze data from a variety of sensors to arrive at more effective courses of action that will reduce human error,” she said. And that can only be a good thing when lives hang in the balance.
Embracing the goal of virtuous AI
One recurring theme of the conference was that by building AI that focuses on ethical behavior, we may be able to drive virtuous outcomes. In fact, setting the goal as virtuous AI may even cause better solutions to be built. As Keesing noted, “Current conversations around AI ethics tend to focus around a minimum standard. It is being defined as a bar to cross, with systems either meeting an ethical AI threshold or falling short. Defining our goal as virtuous AI drives us to set our sights higher, developing AI that doesn’t just avoid turning into SkyNet, but improves continuously in a never-ending bid to achieve more virtuous outcomes in the world.”