The future of war lies in part with what the military calls "autonomous weapons systems" (AWS), sophisticated computerized devices which, as defined by the U.S. Department of Defense, "once activated, can select and engage targets without further intervention by a human operator."
Whether that's a good idea or a bad one is debatable, but the question is not if, but how soon, autonomous, artificially intelligent machines will fight side by side with human soldiers on the battlefield. The late U.S. Army General Robert W. Cone predicted in 2014 that as many as one-quarter of all U.S. combat soldiers might be replaced by drones and robots within the next 30 years.
In the U.S., both the Army and Marine Corps are already testing remote-controlled devices like the Modular Advanced Armed Robotic System (MAARS), an unmanned ground vehicle (UGV) designed primarily for reconnaissance that can also be equipped with a grenade launcher and a machine gun.
Even though it is unmanned and operable from a distance of up to a kilometer away, MAARS falls well short of being an autonomous weapons system. In robotics terminology, it's a "human-in-the-loop" system, meaning it requires interaction with a human operator to perform its functions. Moving up the ladder of autonomy, there are "human-on-the-loop" systems, which are capable of acquiring and engaging targets on their own but can be overridden by human operators; and, finally, fully autonomous "human-out-of-the-loop" systems which, once activated, go about identifying and launching attacks against enemy targets with no human oversight at all.
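The ladder of autonomy described above can be sketched in code. This is purely an illustrative model, with hypothetical names and simplified logic, not the control software of any real system:

```python
# Illustrative sketch only: the three autonomy levels described in
# robotics terminology, modeled as a simple engagement-policy check.
from enum import Enum

class AutonomyLevel(Enum):
    HUMAN_IN_THE_LOOP = 1      # operator must authorize each action (e.g. MAARS)
    HUMAN_ON_THE_LOOP = 2      # system acts on its own, but a human can override
    HUMAN_OUT_OF_THE_LOOP = 3  # fully autonomous once activated (LAWS)

def may_engage(level: AutonomyLevel,
               operator_approved: bool,
               operator_vetoed: bool) -> bool:
    """Return True if an engagement may proceed under the given autonomy level."""
    if level is AutonomyLevel.HUMAN_IN_THE_LOOP:
        return operator_approved       # nothing happens without explicit approval
    if level is AutonomyLevel.HUMAN_ON_THE_LOOP:
        return not operator_vetoed     # proceeds unless a human steps in
    return True                        # out of the loop: no human check at all

# A human-in-the-loop system without approval may not act;
# an out-of-the-loop system ignores the human entirely.
```

The point the model makes plain is that the three levels differ only in where (and whether) a human decision gates the action, which is exactly what the policy debate below turns on.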
The latter are known as lethal autonomous weapons systems (LAWS for short, or more pithily, "killer robots," as critics have dubbed them). Though they may conjure up futuristic, dystopian images redolent of The Terminator (the Arnold Schwarzenegger film about an armed super-robot from the future) or Robopocalypse (Daniel Wilson's 2011 science fiction novel about AI weapons turning on their creators), the dangers they pose are firmly rooted in reality.
Are LAWS already in use? No. Yes. Maybe. There are weapons like Samsung's SGR-A1 sentry gun, currently said to be deployed along the demilitarized zone between South and North Korea, which are configured to require a human in the loop but are reportedly capable of engaging an enemy autonomously (though the SGR-A1's developers deny this).
In any case, autonomous weapons are surely under development by many nations, a reality so concerning to non-military robotics and artificial intelligence experts that many signed an open letter in 2015 urging a pre-emptive international ban on the weapons. One fear expressed by the signatories (among whom were such sci-tech luminaries as Stephen Hawking, Elon Musk, and Steve Wozniak) is that autonomous weapons systems are much closer to hand than the military cares to acknowledge:
Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.
Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. ... It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.
The letter was released by the Future of Life Institute, a non-profit advocacy group that works to ensure technological advances are used for the benefit, not the detriment, of humanity. Another group sounding the call for a halt to the development of autonomous weapons is the Campaign to Stop Killer Robots, co-founded by Human Rights Watch in 2012.
"We came down on the side of a pre-emptive ban as being the best and most lasting solution to the challenges raised by these weapons," the campaign's global coordinator, Mary Wareham, told us. Members of the group will participate later this year in a meeting of government experts called by the United Nations Convention on Conventional Weapons (CCW), from which a new international protocol could emerge addressing the issue of — perhaps ultimately prohibiting — lethal autonomous weapons systems. Wareham says she saw a similar meeting of governmental experts in 1995 result in a ban on the use of blinding laser weapons (a ban which has held to this day).
Nineteen nations (so far) have called for an outright pre-emptive ban on LAWS. Dozens more say it is important to retain human control over such weapons, Wareham says, which her group reads as a positive sign that they will eventually sign on to the ban.
The U.S. isn't among the nineteen nations calling for a ban, though it is one of the few countries whose military has adopted an official policy governing the development and use of autonomous weapons. Department of Defense Directive 3000.09, "Autonomy in Weapon Systems," issued in 2012, centers on this basic principle:
Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.
What constitutes an "appropriate level" of human judgment is never defined, but the written policy nevertheless places the United States within the contingent of nations that favor retaining human control over weapons systems. The directive turned five years old in 2017, however, which means it must be renewed or updated this year, or it will expire.
Current Deputy Secretary of Defense Robert Work (who also held that position under President Obama) is a firm believer in AI on the battlefield, and also takes a strong stand against excluding humans from the loop. He has compared innovations in artificial intelligence to technological achievements like the rifle, the telegraph, and railroads, "things that were changing society and they ultimately changed the way war occurred," he said in a 2016 speech in Brussels. "The same thing is going to happen with AI and autonomy."
In the same speech, Work dismissed talk of "killer robots" and comparisons to The Terminator. "Humans, in the United States' conception, will always be the ones who make decisions on lethal force, period," he said. "End of story."
One would think such a view would be fully compatible with a ban on lethal autonomous weapons systems, but at least one other member of the Trump administration, Steven Groves, Deputy Chief of Staff of the U.S. Ambassador to the United Nations, has said otherwise. "The United States should oppose attempts at the CCW to ban LAWS," Groves wrote in 2015, "and should continue to develop LAWS in a responsible manner in order to keep U.S. armed forces at the leading edge of military technology."
Even so, Mary Wareham says, the United States has generally been more supportive of formal discussions about the future of autonomous weapons than other countries capable of producing them — for example, Russia, which to date has shown a greater inclination to view such efforts as "premature." China, on the other hand, moved the process a step forward in 2016 with the publication of a position paper questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to state the necessity of developing new international law in the matter.
The first-ever meeting of the United Nations Convention on Conventional Weapons Group of Governmental Experts on Lethal Autonomous Weapons Systems was scheduled to take place on 21 August 2017 but was postponed for administrative reasons. A new statement released by Elon Musk and fellow signatories that same day entreated U.N. participants to "double their efforts" to prevent an artificial intelligence arms race when they finally meet in November:
Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close. We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.