Military’s killer robots must learn warrior code
Autonomous military robots that will fight future wars must be programmed to live by a strict warrior code or the world risks untold atrocities at their steely hands.
The stark warning – which includes discussion of a Terminator-style scenario in which robots turn on their human masters – is issued in a hefty report funded by and prepared for the US Navy’s high-tech and secretive Office of Naval Research.
The report, the first serious work of its kind on military robot ethics, envisages a fast-approaching era where robots are smart enough to make battlefield decisions that are at present the preserve of humans. Eventually, it notes, robots could come to display significant cognitive advantages over Homo sapiens soldiers.
A simple ethical code along the lines of the “Three Laws of Robotics” set out in 1942 by Isaac Asimov, the science fiction writer, will not be sufficient to ensure the ethical behaviour of autonomous military machines, the report warns.
“We are going to need a code,” said Dr Patrick Lin, the report’s lead author. “These things are military, and they can’t be pacifists, so we have to think in terms of battlefield ethics. We are going to need a warrior code.”
Isaac Asimov’s three laws of robotics
1 A robot may not injure a human being or, through inaction, allow a human being to come to harm
2 A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law
3 A robot must protect its own existence as long as such protection does not conflict with the First or Second Law
Introduced in his 1942 short story “Runaround”
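The three laws form a strict precedence hierarchy: each rule binds only where it does not conflict with the ones above it. As a rough sketch of why such a code makes a machine a pacifist (this example is not from the Navy report; every name in it is hypothetical), the priority ordering can be expressed in a few lines of Python:

```python
# A minimal sketch, not drawn from the Navy report: Asimov's laws modelled
# as an ordered constraint check. Every name here is hypothetical.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would carrying out the action injure a human?
    ordered_by_human: bool  # was the action commanded by a human?
    endangers_self: bool    # would the action destroy the robot?

def permitted(action: Action, inaction_harms_human: bool) -> bool:
    """Check the three laws in strict priority order."""
    if action.harms_human:
        return False        # First Law veto: never injure a human
    if inaction_harms_human:
        return True         # First Law also forbids harm through inaction
    if action.ordered_by_human:
        return True         # Second Law: obey, since no First Law conflict
    return not action.endangers_self  # Third Law: self-preservation last

# The pacifism Dr Lin points to: an order to fire on a person is vetoed
# by the First Law before the Second Law (obedience) is even consulted.
strike = Action(harms_human=True, ordered_by_human=True, endangers_self=False)
print(permitted(strike, inaction_harms_human=False))  # -> False
```

Because the First Law sits at the top of the hierarchy, an order to use lethal force is rejected before obedience is ever weighed, which is precisely the conflict the warrior code proposed in the report would have to resolve.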