It's not about the technology, it's about us.
That’s the obvious yet elusive truth that has been elbowing its way into the popular consciousness since 2010: there is no digital “world,” no “first nation in cyberspace” lurking behind our computer screens. Instead, what we see and do online is a part of what we do, period. The Internet is neither a magical kingdom nor a pirate nation of free speech and perfect artistic self-expression circling the earth beyond the reach of national laws. Nor is it an alien intrusion flooding a fragile human reality with orcs, kinky sex, and email scams from Nigerian bankers. The Internet is us. Once we accept this, it follows that we have a lot of work to do.
How did we ever come to doubt that our technology is an extension of ourselves? In part, we needed a new dream, because the old ones had run dry. In the 1970s, for example, NASA commissioned a series of amazing images of space colonization to fire the public imagination in the years after the Apollo Program, showing vast cylinders with diamond-glass windows hung in orbit over our blue-green world, their interiors speckled with a suburban paradise of classic American Dream houses. This was the High Frontier, a continuation to the stars of the westward drift that created America: a perfect pressure valve implying boundless possibility, harking back to the myths of the abundant past and the aspirational future at the same time. But by the ’80s, that dream was over. We weren’t going back to the moon, and orbital living had gone from “near future” to “not in my lifetime.”
Then in 1982 William Gibson wrote “Burning Chrome” and the original Tron hit cinema screens. Graphical user interfaces—the ancestors of today’s Mac OS and Windows—replaced the simple text instructions of the command line interface. Abruptly there was a new realm, glimpsed through a computer monitor and tantalizingly within reach. By the ’90s it was official: cyberspace was a place, and everyone could visit. You could even set up a home, built at first out of text, then out of pictures, and eventually in full-fledged (if poorly rendered) virtual environments. The freewheeling share-and-share-alike culture of the early MIT engineers spread out into a simple online culture where it was hard, if not impossible, to exchange money for goods or services, and physical consequences were all but unknown. The growing Internet was imaginative, argumentative, academically playful, and lawless.
As that environment became commercialized in the mid-1990s, online communities went through a kind of fast-forward version of the evolution of an up-and-coming big-city neighborhood. Lawsuits became more common as the ethos of a microcommunity (which, by virtue of its small population, had no obvious direct economic effects) spread out into the wider consumer culture. The copyright industries—especially music, TV, and film—took a hard line on copyright infringement.
And then in 2010, the iPad arrived: a computer we could at last touch. Like curious aquarium fish, digital objects followed our fingers across the glass, responsive to a far more instinctive, immediate kind of perception than ever before.
“You won’t understand,” reviewers wrote, “until you hold one.” The trend continues: a recent project at the Microsoft Applied Sciences Group features a canted transparent desktop that creates an illusion of 3-D space while the user “reaches” around and into the area. Increasingly sophisticated haptic hardware can be used to give digital objects surfaces and edges we can feel. Beyond that, researchers looking to create prostheses that function more and more like natural limbs seek to wire their technologies straight into the brain. Data just aren’t foreign objects anymore, experienced cognitively, read from a screen, and manipulated by command through complex tools like keyboards. On an intuitive level they are now something increasingly physical and therefore human. The digital world is just an aspect of the real one. Already, research reveals that our brains have started to assume the likely availability of an Internet connection when looking for information; the more we can touch it, the closer and more naturalistic that relationship will become.
Which means several quite interesting things. The most important, perhaps, is that we really have to consider what happens to information when we let it loose in the world. The discussion of digital privacy has been around for quite some time, of course, and governments have asserted, and in many cases established, far more intrusive powers over digital media than they have over comparable analog communications. Broadly this has been done by refusing to acknowledge the equivalence of the rising digital media to the preexisting forms—something that becomes untenable as we start to grasp that the distinction is spurious. The Megaupload case has recently thrown up a typical example: the FBI exported data from New Zealand in what the defense claims amounts to a breach of New Zealand law. The government’s response is that the law applies only to physical objects. Where the rubber meets the road, though, may be less obvious: the serious issues of digital privacy may arise from choice architecture, behavioral economics, and the techniques euphemistically called “nudging.”
What’s nudging? A way of getting people to act how you want them to. Consider organ donation: there are never enough donor organs, most probably because people are unwilling to contemplate the possibility of their own sudden death and therefore do not take the decision to obtain and carry organ-donor cards. So why not make the issue opt-out? If you die and you’re not carrying a “Do not donate” card, your liver goes to that teen violin prodigy dying in room 8. Most people wouldn’t opt out for exactly the same reason they don’t opt in—it’s easier to go with the default option. As a result, the supply of donor organs would be massively improved—an ideal outcome. A reframing of a difficult choice has resulted in people making the decision they ought to, without actually forcing them through the difficult process of contemplating mortality and morality. Right? Well, yes and no. They haven’t actually made a choice. The choice has been recast to push them—nudge them—in a direction someone in authority believes is the better one.
It’s not the most democratic process—in fact, it reeks of paternalism. The worst part, though, is that choice is a skill, and like any other skill it atrophies if you don’t use it. The tools of behavioral economics can, with a generous mix of data, be built into maps of action and choice that can in turn be used to build more and more effective nudges. A world of unbalanced pseudochoice could create a consumer population and an electorate lacking the skills necessary to make decisions. The wisdom of crowds can be corrupted very easily. It doesn’t work well if people vote in teams, for example, which is why national parliaments are so wretchedly disappointing; by the same token, it suffers if the individuals making choices don’t give any serious thought to their decisions.
All of which means that personal information is one of the most important things we possess. Facebook may have been overvalued at $104 billion, but at around $100 per user, the data have actually been underpriced—not in terms of profit to commerce, but in likely cost to society. The “free to user” model favored by Facebook and Google is part of a discussion not just about personal privacy but about citizenship, and it’s a discussion we all have to give some time to. (For what it’s worth, it’s also at the heart of Google’s internal dilemma: the company’s mission is to make information available and useful, on the understanding that information is empowering. Its revenue stream, on the other hand, depends on supplying the kind of data that ultimately can become massively disempowering. Untangle that if you can.)
The realization that the digital world is not disconnected from the mundane brings these issues into sharp relief, along with a host of others, such as the chain of cause and consequence that connects our technological devices with appalling violence in the Democratic Republic of the Congo and the Middle East or with poor working conditions in China and Southeast Asia. We don’t get a free pass on bad behavior simply because it takes place in a digital environment or is associated with technology.
We have a lot to do: if the Internet is not to be a playful and idealized Other Place, then the aspirations toward free speech and justice that the early online homesteaders sought to realize on their frontier must be achieved in the world we have, not in a virtual surrogate. Aspirational virtuality is all very well, but we know culturally and individually that a society in which the vast majority live hard lives, and supplement their existence with fantasies, is not a fair or a good one.
Fortunately, digital technology also provides us with the opportunity to respond—if we take it. Research shows that using computers—especially for playing games, but also for reading Web pages—requires that we practice decision making. We can build on that, take pains to engage with tough choices rather than favoring the pablum we are often offered, and learn the basic tricks of behavioral economics to know when we’re being played. For example: know that when you see Cornish lobster or white truffle at an astronomical price on a menu in New York, part of that dish’s function is to make everything else seem cheaper. Know also that restating a question in a language you speak less than perfectly forces you to frame the issue more rationally. Cultivate a habit of scrutinizing your choices for automatic, unconsidered responses. Use the Internet to seek, receive, and impart information you have assessed as useful: your personal opinion and expertise contribute to the biggest smart crowd there is. Don’t be afraid to change your mind if events prove you wrong. The choices we make today determine what happens, and “wait and see”—as the content industries have discovered over the last decade—is not a neutral position. It’s very nearly an admission of defeat.
The technological revolution has put us in control of our options, and now we have to learn how to exercise them, or they will once more be expropriated. The massive rejection of the Stop Online Piracy Act earlier this year showed the strength of people power expressed and organized through the Internet—but that was negative rather than innovative. We are in a position now where we can look forward and ask what sort of society we want and try to build it not from the top down but from everywhere to everywhere else.
When Larry Page and Sergey Brin created Google, they did so not only with a view to making money but also with an understanding that they were creating a force for radical change. Even if the company’s mission to make the world’s information accessible turns out to be a little hopeful in its assessment, its founders still offer a prototype of how we should see enterprise and action. We have to learn to make choices with an eye not just to today, but to tomorrow, and not just to ourselves, but to the wider society. The sum and interactions of our choices make the world.
With distributed power, it seems, comes distributed responsibility.