Wednesday, September 30, 2009

Recent UAV Incidents and the Human-Computer Problem

Over the weekend, the RISKS news digest of the Association for Computing Machinery (ACM) passed along a report about a United States Air Force (USAF) drone that became unresponsive to operator commands and was accordingly shot down by a manned USAF aircraft before it could cross the Afghan border. It's unfortunate that ACM RISKS chose to package a rather lightweight and sensational report that seemed to insinuate that the drone may have become self-aware, and that ended with a provocative quotation about the drone being motivated by having grown sick of "reaping hapless fleshies."

The USAF's official statement on the incident is unsurprisingly terse, and an investigation is still pending. However, there seem to be no serious grounds for supposing anything like self-realization or rebellion (compassionate or otherwise) on the part of the drone, and there are several logical and factual problems with even suggesting this to be the case. Most seriously, there is no good reason to believe that a drone designed and built to be controlled by a remote human operator would have anything more than minimal capabilities of autonomous operation. Giving the machine unnecessary autonomy of any kind would be counterproductive to the whole aim of building and operating an aircraft with a remote human pilot. More to the point, it seems excessively whimsical to suppose that such a highly constrained, special-purpose computer-control system could possess the richness and complexity that would necessarily underlie the spontaneous emergence of so miraculous and unprecedented a thing as machine awareness. As for becoming sick of "reaping hapless fleshies," I would hope that the human operators would take it upon themselves to cultivate such a sensibility, rather than offloading the work of ethics onto a machine.

For the record, I believe that machine intelligence is entirely possible, even probable, and I even admit the possibility of its spontaneous emergence. What I object to is the suggestion that a special-purpose machine, containing the same sort of embedded computer systems common to most modern aircraft and designed to be controlled by a human pilot (albeit a remote one), would suddenly become the first machine to be not only willful, but conscious enough to be compassionate or vengeful.

As is always the case, though, truth is stranger than fiction. This most recent failure of a U.S. military unmanned aerial vehicle (UAV) has some striking features that tell us, surprisingly or unsurprisingly, that we have more difficult problems with ourselves and how we use our machines than we do with the machines themselves. Five years ago, the Federal Aviation Administration (FAA) compiled and analyzed all available information on accidents involving U.S. military UAVs (there are, to my knowledge, no civilian UAVs in widespread use) and found that the MQ-9 Reaper (also known as the Predator B), the UAV most commonly used in current U.S. operations in Afghanistan, suffered a noticeable preponderance of accidents due to human error[1]. In particular, operator difficulties with the poorly designed interface used for remote control of the Reaper were cited as contributing factors in just under half of all reported accidents. These included an improperly executed attempt to transfer control of the UAV between ground control stations that resulted in the aircraft's engines being shut off, and another episode wherein a pilot accidentally executed a routine that erased the random-access memory of the control computer while the UAV was in flight. The FAA report cites another source claiming that the sequence of keystrokes used to control the lights on the Predator UAV is almost the same as the sequence that cuts the aircraft's engine. Anyone who has ever played a video game knows how easy it is to hit the wrong keys and get your computerized proxy metaphorically killed as a result. It would be naive to suppose that such mishaps are impossible just because the proxy is a $53.3 million[2] aircraft and the killing is quite literal.
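To make the hazard concrete, here is a minimal sketch in Python of the kind of audit that could flag a benign command bound one keystroke away from a catastrophic one. The key sequences and command names are entirely hypothetical, since the actual Predator control bindings are not public; the point is only the pattern.

    # A sketch of an interface audit that flags command bindings lying
    # within one keystroke of an irreversible command. All sequences and
    # command names here are hypothetical, not the Predator's real ones.

    def edit_distance(a, b):
        """Levenshtein distance between two key sequences."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                # delete a key
                                curr[j - 1] + 1,            # insert a key
                                prev[j - 1] + (ca != cb)))  # substitute one
            prev = curr
        return prev[-1]

    # (sequence, command, irreversible?) -- a benign binding sitting one
    # keystroke from a catastrophic one, as in the incident the FAA cites.
    BINDINGS = [
        ("ctrl-l-1", "toggle_nav_lights", False),
        ("ctrl-l-2", "cut_engine_fuel", True),
    ]

    for seq_a, cmd_a, _ in BINDINGS:
        for seq_b, cmd_b, irreversible in BINDINGS:
            if seq_a != seq_b and irreversible and edit_distance(seq_a, seq_b) <= 1:
                print("WARNING: %s (%s) is one keystroke from %s (%s)"
                      % (cmd_a, seq_a, cmd_b, seq_b))

An audit like this costs almost nothing to run against a binding table before an interface ships, which makes the reported similarity between the lights sequence and the engine-cut sequence all the more striking.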

I should note that most of the other UAVs reviewed in the FAA report showed a much lower incidence of accidents attributed to human error, and these could usually be traced to a single eccentricity of the particular aircraft that made its operation counter-intuitive (for instance, having to turn a knob to the left in order to make the aircraft turn right). However, this only seems to underscore how much the generally poor quality of the Reaper interface contributes to accidents. The episodes described in the FAA report sound eerily reminiscent of the infamous Therac-25 incidents[3], wherein a minor confusion at the interface between human and machine was all it took for someone to be seriously injured or killed. Perhaps we should worry not so much about the consciousness of our machines as about the consciousness of ourselves.
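The standard defense against this class of error, and one whose absence was central to the Therac-25 failures, is a software interlock: an irreversible command should require a deliberate, stateful confirmation rather than firing on a single, possibly mistyped, input. Here is a minimal sketch of one such arm-then-confirm pattern, with names that are purely illustrative and not drawn from any real control system.

    # A minimal arm-then-confirm interlock for irreversible commands.
    # All names here are illustrative, not from any real control system.

    import time

    class GuardedCommand:
        def __init__(self, name, action, confirm_window=5.0):
            self.name = name
            self.action = action                  # callable with the real effect
            self.confirm_window = confirm_window  # seconds allowed to confirm
            self._armed_at = None

        def invoke(self):
            now = time.monotonic()
            if self._armed_at is None or now - self._armed_at > self.confirm_window:
                # A first (or stale) invocation only arms the command.
                self._armed_at = now
                print("%s armed; repeat within %.0f s to execute"
                      % (self.name, self.confirm_window))
            else:
                # A deliberate repetition inside the window executes it.
                self._armed_at = None
                self.action()

    cut_fuel = GuardedCommand("cut_engine_fuel", lambda: print("engine shut down"))
    cut_fuel.invoke()  # a stray keystroke merely arms the command
    cut_fuel.invoke()  # only a prompt, deliberate repeat carries it out

The particular pattern matters less than the principle: the interface, not the operator's memory of which keystroke does what, should bear the burden of distinguishing the routine from the catastrophic.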

It's noteworthy that the wayward UAV was destroyed once it stopped responding to commands from its operator. This is a dramatic illustration of a basic technological principle: the thing has form and value only insofar as it serves a human purpose. The destruction of the unresponsive Predator can also be viewed, perhaps, as a tacit admission that the thing is dangerous in the absence of a human controller. Of course, there were sensible and sound reasons for the USAF's action: the drone was on course to cross international boundaries, which could easily and rightly have been read by others as an act of callous negligence, if not outright aggression. In all likelihood, this particular failure of a Predator was due to a component malfunction and not to any error by a human operator or maintainer. However, the episode itself and the responses it has drawn at large all call attention to a basic concern with the notion of a powerful technological artifact being allowed to drift freely out of human control. I would argue that such concern is a mark of sanity. The ease with which such errors can happen, through simple misunderstandings or oversights, should highlight the profound difficulties in navigating the interface between human intentions and the machinery built and deployed to execute them.




[1] Williams, Kevin W. (2004). "A Summary of Unmanned Aircraft Accident/Incident Data: Human Factors Implications." DOT/FAA/AM-04/24, Office of Aerospace Medicine, Washington, DC.

[2] U.S. Air Force Fact Sheet on the MQ-9 Reaper.

[3] Leveson, Nancy G., and Clark S. Turner (1993). "An Investigation of the Therac-25 Accidents." IEEE Computer, 26(7):18-41.
