Wednesday, September 30, 2009

The Ethics of Automated Weapons

I regularly follow developments in technology of all sorts, including in the computing, aerospace, and military fields. One thing that's becoming quite clear to me is that applications of automated killing are rapidly proliferating. As someone who knows a thing or two about computers and the software that runs them, I am more than a little concerned.

In the US, virtually all branches of our military are devising, developing, and deploying unmanned machines that are capable of taking lives. These include aircraft, ships, and ground vehicles.

These systems had humble beginnings focused on support functions like reconnaissance and mobility: aircraft with sensors and cameras that could observe targets, and vehicles that could carry gear across rough terrain. They were like remote-controlled toys on steroids, reducing the risk to our troops and increasing their combat effectiveness.

As the machines became more robust and reliable, their functional envelope was extended to include offensive capabilities. It is now common for these machines to carry air-to-ground missiles, bombs, cannons, and machine guns. We regularly hear about unmanned aerial vehicle (UAV) strikes on ground targets in Iraq, Pakistan, and Afghanistan.

Today, each of these vehicles requires direct human supervision. UAVs have pilots who fly them via remote control from thousands of miles away. I sometimes wonder about the morality of killing through a video screen and then going home for dinner with the family. Does this lead to a lower threshold for deciding to kill? Does it remove some of the moral repercussions of killing? Will the reduced risk to our own citizens make us more apt to choose war as a means to further our national interests?

Even more challenging, the future direction is clearly towards autonomous or semi-autonomous killing machines. There are many valid arguments for this strategy. The time delay inherent in remote communication reduces the effectiveness of current remote-controlled vehicles; UAV pilots report a delay of up to two seconds between their control inputs and the actual change in the aircraft's attitude, and the images sent back to them are similarly delayed. This reduces their ability to lock on to fast-moving targets and to avoid threats that could destroy the UAV. If the UAV could be directed to a target and then allowed to operate on its own, it could make rapid course corrections that would increase its ability to successfully carry out the attack and to avoid or escape from threats. This would be a semi-autonomous UAV, because a human operator still designated the target and decided to attack it. A fully autonomous vehicle would simply be directed to a patrol area and could decide on its own when to engage a target or flee from a threat.
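
To get a rough sense of why that delay matters, here's a quick back-of-the-envelope sketch in Python. The two-second round trip is the figure pilots report above; the target speeds and the onboard control-loop latency are purely illustrative numbers I've picked for the sake of the example.

  # Rough illustration: how far does a target move before a correction takes effect?
  # The ~2 s figure is the round-trip delay UAV pilots report; everything else
  # (target speeds, onboard loop latency) is an illustrative assumption.

  REMOTE_DELAY_S = 2.0    # operator input -> aircraft response over the remote link
  ONBOARD_DELAY_S = 0.05  # hypothetical control loop running on the vehicle itself

  targets = {
      "person running":   5.0,    # metres per second
      "truck on a road":  25.0,
      "low-flying plane": 150.0,
  }

  for name, speed in targets.items():
      print(f"{name:18s} moves {speed * REMOTE_DELAY_S:6.1f} m before a remote correction lands, "
            f"but only {speed * ONBOARD_DELAY_S:5.2f} m for an onboard one")

The particular numbers aren't the point; the point is that at any realistic speed, two seconds of lag means tens to hundreds of metres of error, and that is exactly the gap onboard autonomy is meant to close.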

Additionally, autonomous machines could be programmed to coordinate attacks in real time with other machines in the area: for example, a fast-moving jet carrying bombs could work in tandem with a helicopter hovering behind a treeline, with the helicopter popping up just in time to train a laser designator on a target and guide the jet's bombs, minimizing the risk to both vehicles and increasing the probability of a successful attack.

The notion of autonomous killing machines raises three key concerns:
  1. What if the machine fails to select appropriate targets? In the best case, a valid target may escape and cause more allied casualties later. In the worst case, allies or civilians may be killed. In the realm of asymmetrical warfare, it is almost impossible for humans to make clear choices and avoid mistakes in these situations; my estimation is that even our best computers are not up to this task.
  2. Will this completely eliminate the ethical burden of making war? With today's UAVs, a person must still, ultimately, decide to 'pull the trigger' and take responsibility for the action. If we remove ourselves from that decision loop, will we still take moral responsibility for the actions of the machines we built and deployed? And if we don't, what will that do to the fabric of society?
  3. Who will control these machines? One thing I've learned from history is that, generally speaking, a large enough group of people will enforce moral behavior. An army won't attack a civilian population; some amoral subgroup might try, but the majority will rapidly bring them under control. In a world where a few people could control a large number of these machines, however, what would prevent a potential slaughter? And could an army of these machines be used to control the population at large in any country, even our own?
These are serious concerns, and there are probably many more I haven't considered here. Yet it seems clear that this technological development track will continue, because it is a natural evolution of our current capabilities and there are reasonable arguments for it. I'm sure I would have much stronger feelings about this if I lived in a country where these weapons are likely to be deployed, and I can only tremble at the thought of being on the wrong end of a gun controlled by a soulless machine.
