Kill Decision

The Dangers of Developing Lethal Autonomous Drones

  • Graphic: Jayde Norström

The 21st century of flying cars and android servants envisioned in the likes of The Jetsons and Back to the Future has yet to become a reality, but flying robots do exist—just not in the forms we expected.

Drones and unmanned vehicles have been on the rise in the past decade, from the invention of goofy applications like the “Tacocopter,” the taco-delivering helicopter drone in San Francisco, to more sinister uses, like military combat drones delivering lethal payloads on assassination missions in the Middle East.

But all of these machines still have a human aspect to them—the person on the other end controlling the drone.

Peter Singer, author of Wired for War: The Robotics Revolution and Conflict in the 21st Century, recently visited Concordia and spoke about the use of drones in modern warfare, such as the MQ-1 Predator and MQ-9 Reaper drones, and the devastating effects they can have both on innocent civilians on the battlefield and on the soldiers who fire their missiles from back home in North America.

Drones have rendered human pilots in the cockpit unnecessary, and the next step in the equation has already arrived—fully autonomous drones requiring no human to control them at all.

The technology has its positives—Google recently unveiled its prototype driverless car, powered by its trademark software “Google Chauffeur,” which is still in beta testing.

But the military is applying this technology as well.

In May of this year, the U.S. Navy tested its X-47B drone as part of its Unmanned Combat Aerial System. The drone took off from the USS George H. W. Bush, followed a flight path, and touched down again without incident, with no humans actively controlling it from the ground.

According to American monthly magazine Popular Science, the X-47B has two weapons bays, but no weapons equipped—yet.

How long before the “kill decision,” the critical choice to fire and take a life, is shifted from humans to software? The technology does not yet exist—or if it does, it is being kept under wraps—but many indicators suggest it will in the near future.

Daniel Suarez, an IT specialist turned cyber-thriller author, gave a TED Talk in June on the possibility of robots making their own kill decisions, citing the automated sentry guns in the Demilitarized Zone between North and South Korea as a precursor to fully autonomous weapons.

“These machines are capable of automatically identifying a human target and firing on it […] at a distance of over a kilometre,” Suarez told the crowd.

But these Terminator-esque sentry guards are still controlled by a human—even though it’s not “a technological requirement,” Suarez said. “It’s a choice.”

But that choice may well soon be in the hands of the machine itself.

Remotely piloted drones are susceptible to human error, hacking or electromagnetic jamming—drawbacks that “smarter,” more autonomous drones could alleviate, Suarez explained. Such drones could react to new circumstances better than their human-controlled counterparts, block external radio signals by hackers, and potentially achieve mission objectives more effectively because of it.

Suarez pointed to a situation in December 2011, when a CIA stealth drone crashed on the eastern border of Iran and fell into the Iranian armed forces’ possession.

With the crash, the Iranian armed forces gained significant intelligence about America’s drone technology, and there was nothing the U.S. could do—but making the drone autonomous and giving it self-destruct capabilities could possibly have prevented the loss.

This ties into the problem of cyber-espionage—top-secret schematics extracted from military computers and sold to the highest bidder in parts unknown. This may be the greatest incentive not to invest in and deploy autonomous weapons—to avoid a black market littered with deadly, anonymous drones without national loyalties or anyone else to answer to.

“It is very likely that a successful drone design [could] be knocked off in contract factories and proliferate in the grey market,” said Suarez. “And in that situation, sifting through the wreckage of a suicide drone attack, it will be very difficult to say who sent that weapon.

“This raises the very real possibility of anonymous war. […] It could create a landscape of rival warlords,” he continued, citing criminal organizations and even “powerful individuals” being able to challenge nation-states with the power of autonomous killer drones.

Indeed, the prospect of creating and deploying weapons that can fire and indiscriminately kill on their own could open a Pandora’s Box, as other nations would scramble to keep up and deploy their own drones, and private interests could utilize the technology to their own benefit as well.

It’s a dark portrait of a Matrix-like future, where anonymous killer machines roam the skies—much more frightening than the current reality where such machines rain death in countries like Pakistan, Somalia and Yemen, where drone strike counts already reach the hundreds, according to a report by NYU and Stanford, “Living Under Drones.”

Before It’s Too Late

The possibility of lethal autonomous weapons isn’t going unnoticed, however—groups have been actively resisting such developments and calling out governments on the world stage.

International organization Human Rights Watch last year released a 50-page report titled “Losing Humanity: The Case Against Killer Robots,” in conjunction with the Harvard Law School International Human Rights Clinic, calling for a preemptive international ban on the development, production and use of autonomous weapons.

“Giving machines the power to decide who lives and dies on the battlefield would take technology too far. Human control of robotic warfare is essential to minimizing civilian deaths and injuries,” said Steve Goose, director of Human Rights Watch’s arms division, in a press release by the organization on the report.

“It is essential to stop the development of killer robots before they show up in national arsenals. As countries become more invested in this technology, it will become harder to persuade them to give it up,” he added.

Suarez believes establishing international treaties on robotics warfare, similar to present treaties on chemical and nuclear warfare, would prevent the new potential weapons from spiraling out of control.

“We need an international legal framework for robotic weapons, and we need it now, before there’s a devastating attack or terrorist incident that causes nations of the world to rush to adopt these weapons before thinking through the consequences,” he said.

Suarez also said integrating autonomous drones into society and fully regulating the new industry is essential to keep autonomous technology safe and prevent it from moving in the wrong direction.

“I think there are tons of great uses for unarmed civilian drones: environmental monitoring, search and rescue, logistics,” he said.

Suarez called for cryptographically signed IDs to accompany each drone produced, similar to licence plates on cars or tail numbers on planes, and for information on drones’ movements in public spaces to be made available to all citizens through something as simple as a smartphone app.
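To give a rough sense of how such signed IDs might work, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the registry, the key handling and the ID format are hypothetical, and a real scheme would use asymmetric signatures (e.g., Ed25519) so that anyone could verify an ID without holding the signing key. The standard library’s HMAC is used here only to keep the example self-contained.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: a registry issues each drone a signed ID at
# manufacture, playing the role of a licence plate or tail number.
# In practice a public-key signature would be used so verification
# does not require the secret key; HMAC stands in here for brevity.

REGISTRY_KEY = secrets.token_bytes(32)  # held by the (hypothetical) registry

def issue_drone_id(serial: str) -> tuple[str, str]:
    """Issue a signed ID for a drone's serial number."""
    tag = hmac.new(REGISTRY_KEY, serial.encode(), hashlib.sha256).hexdigest()
    return serial, tag

def verify_drone_id(serial: str, tag: str) -> bool:
    """Check a broadcast ID against the registry's key."""
    expected = hmac.new(REGISTRY_KEY, serial.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

serial, tag = issue_drone_id("DRN-2013-0001")
print(verify_drone_id(serial, tag))           # a genuine ID verifies: True
print(verify_drone_id("DRN-FAKE-9999", tag))  # a spoofed ID fails: False
```

The point of the sketch is the accountability property Suarez describes: a drone broadcasting an ID that fails verification is immediately identifiable as unregistered, which is what would make a public tracking app meaningful.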

It seems autonomous drones are inevitable in an increasingly automated world—the future of flying robots is here after all. But the key is to channel such drones’ uses and applications for the betterment of society, and never on the battlefield.

The kill decision should never be in the hands of a machine.
