Will You Be Murdered By a Robot?

Frightening but never fear-mongering, The Future of Violence posits a tomorrow full of techno-threats demanding discerning vigilance.

03.23.15 9:15 AM ET

With a bit of technical knowledge and a good imagination, any malevolent person may soon be able to eradicate the human race. This is a mildly exaggerated version of a fundamental claim in The Future of Violence: Robots and Germs, Hackers and Drones—Confronting a New Age of Threat, an alarming and informative new book by Benjamin Wittes and Gabriela Blum.

By combining the elements of the subtitle in sinister ways, Wittes and Blum conjure a number of nightmarish scenarios: a drone hovers above a packed sports stadium and sprays invisible anthrax spores into air breathed by tens of thousands; a miniature robotic drone that looks exactly like a spider assassinates a businessman as he showers; a malign molecular biology graduate student modifies the smallpox virus to enhance its lethality and overcome vaccinations.

Of course, with a bit of technical knowledge and a good imagination, any thoughtful person can already eradicate the human race in all manner of weirdly engrossing hypotheticals. In fact, some people, like the philosophers at Oxford’s Future of Humanity Institute, seem to make a nice living by contemplating scenarios of mass death. But Wittes and Blum are not professional prophets of doom. Wittes is a senior fellow at the Brookings Institution, and Blum teaches at Harvard Law School.

Their book doesn’t aim to convince us that terrifying but seemingly outlandish scenarios are in fact imminent. They start from the premise that the terrifying scenarios are not only possible, they’re almost certainly inevitable in some form. The essential task, then, is not to sketch in baroque detail the contours of particular horrific hypotheticals, but to develop a viable set of public and private tools to decrease the likelihood and diminish the severity of a large-scale catastrophe.

Before considering their solutions, though, it’s important to see at least some evidence that the threats they describe are in fact plausible. The major menaces fall into three broad categories that correspond to major areas of technological development: robotics, biotechnology, and networked computing. Each field is ambivalent, a Janus-faced force that can be turned to good or harm. Drones drop essential medicine in remote regions and catch corporations doing things like releasing pig blood from a slaughterhouse into Texas waterways. Then again, they are also ideal devices for unwarranted surveillance and remote-control murder of civilians in foreign countries.

The same stark contrasts characterize biotechnology and computer networks. Depending on the examples you select, research in biotechnology promises to cure deadly diseases or threatens to synthesize lethal and infectious pathogens specifically designed to exploit the human genome’s weaknesses. Networked computers, meanwhile, are either emancipating purveyors of knowledge and multipliers of ingenuity or massively vulnerable systems on which we depend for everything from electricity to a functioning economy.

Wittes and Blum give many specific examples from each domain. In 2011, scientists researching ways of modifying bird flu to make it more contagious among humans clashed over whether publishing their research would enable bioterrorism or promote the discovery of a cure. Both sides made reasonable points. A natural mutation might create a highly contagious form of the virus anyway, some argued, so synthesizing such a strain was only a sensible precaution that would enable research. But in the wrong hands the information could be a recipe for a global pandemic.

Weapons of mass destruction are typically associated with nation-states rather than rogue individuals. And for most of the 20th century, the technical difficulty and expense of constructing a nuclear bomb placed the endeavor far beyond the capabilities of garage tinkerers and even many countries. But certain 21st-century weapons—imagine some combination of germs, drones, and hacks that target infrastructure—have the potential to inflict mass casualties yet require only moderate investment and technical knowledge that is often freely available in the public domain. States can no longer afford to view only other states as security threats; smaller groups and even individuals are now potential destroyers of nations. Wittes and Blum call this a world of many-to-many threats, and it’s one in which “You can be attacked from anywhere—and by nearly anyone.”

Pundits and policymakers often insist that increased government surveillance is the price we must pay for security in such a world of diffuse and omnipresent threats. To stop someone from building a destructive computer or biological virus, the argument goes, it’s important to grant the government some leeway to snoop into our private affairs. Wittes and Blum question this conventional thinking in all sorts of interesting and persuasive ways.

Most formulations of the security-and-privacy debate presume a metaphor of balance: as one good increases, the other necessarily decreases. While this is true in certain instances like airport security screenings, the relationship between security and privacy is not always inverse. Increasing privacy doesn’t automatically threaten security, nor does increasing security invariably erode privacy. Allowing the U.S. Postal Service to screen letters for anthrax spores, for instance, enhances safety without diminishing privacy in any meaningful sense. Protecting online data from identity thieves increases both privacy and security. An extreme case—a total absence of any government supervision of civilian life—would not necessarily promote privacy. As failed states and lawless areas show, people living without any sort of effective government are incredibly vulnerable to all sorts of abuses and privacy violations by criminals, extremists, and opportunists.

It’s also important to distinguish semantic issues from substantive ones. Airport security surveillance sounds ominous and Orwellian, while airport security screening sounds reasonable and routine. Such linguistic framing tends to obscure an important truth: The exercise of some degree of government power, given appropriate oversight and non-discriminatory practices, doesn’t inevitably compromise liberty. In many cases, it supplies the security that is a precondition of liberty.

States, however, often need private assistance in securing large and vulnerable public spaces and networks. The American government authorized private attacks on British vessels during the War of 1812 and even paid a bounty to any civilians who managed to destroy a British warship. In the early days of the American railroads, private security hired by rail companies often provided the primary protection against bandits and robbers. Wittes and Blum argue that digital frontiers are a bit like the coastline of America in the early 19th century: impossibly long and difficult for a single entity to protect from all potential threats. The flourishing of private-sector digital security services makes more sense in this historical context: Governments have a long tradition of deputizing civilians to help protect especially exposed frontiers. Of course this doesn’t mean the practice is without risks, but regulation and oversight can at least mitigate them.

The Future of Violence is a frightening book, but it’s not an exercise in fear-mongering. Rather than arousing fear in order to advocate some dogmatic ideological agenda, Wittes and Blum offer a good example of a productive response to the world’s multiplying horrors: thoughtful and realistic analysis of potential solutions.