While the U.S. military does not yet field weapons completely controlled by artificial intelligence, when such weapons are deployed, the AI driving them will likely be programmed with a “Judeo-Christian” value system, suggests Lt. Gen. Richard G. Moore Jr., deputy chief of staff for plans and programs in the U.S. Air Force.
Lethal autonomous weapon systems, or LAWS, explains a March report from the Congressional Research Service, “are a special class of weapon systems that use sensor suites and computer algorithms to independently identify a target and employ an onboard weapon system to engage and destroy the target without manual human control of the system.”
Last Thursday, at an event hosted by the Hudson Institute, Moore was asked when the U.S. would be able to start deploying LAWS and if the military had an ethical duty to keep humans involved even when the technology is ready.
Moore acknowledged that the U.S. is not yet ready to deploy LAWS, but said it is an area of particular concern for Deputy Secretary of Defense Kathleen Hicks.
“What will the adversary do? It depends [on] who plays by the rules of warfare and who doesn't. … There are societies that have a very different foundation [than] ours,” he said. “Our society is a Judeo-Christian society, and we have a moral compass. Not everybody does. And there are those that are willing to go for the ends, regardless of what means have to be employed. And we'll have to be ready for that.”
The Congressional Research Service noted that although LAWS aren’t yet in widespread development, “it is believed they would enable military operations in communications degraded or denied environments in which traditional systems may not be able to operate.”
A growing segment of the international community has been calling for a ban on, or regulation of, LAWS due to ethical concerns. Moore said that the development of ethical AI will be a significant feature of the Defense Department’s 2024 budget.
“And that takes several forms. The first one is what do we think we're allowed to let AI [do]? The second one is how do we know how the algorithm made decisions? And do we trust it? And the third one is, at what point are we ready to let the algorithm start doing some things on its own that maybe we are or aren't comfortable with?” Moore said.
He noted that Chris Brose's 2020 book, The Kill Chain: Defending America in the Future of High-Tech Warfare, delves into how new technologies are threatening America's military might.
“He talks extensively about whether you would trust a young soldier on the ground that maybe hasn't had sleep in three or four days and hasn't had a good meal or certainly [a] shower,” Moore said.
“This young soldier, that heat, sweat, fatigue, all of that is making a decision about employing lethal force or not, or an algorithm that never gets tired. You might actually think that if you can understand how the algorithm makes decisions and trust it, you might rather have that algorithm that never gets hot and never gets tired, it never gets hungry. You might rather have it making decisions for you,” Moore said.
“But until you have in place the foundations of ethical AI that allow that to happen, you can't get there. So it is a very important discussion. It's one that's being held at the very highest levels of the Department of Defense.”