03.05.13

The Internet Won’t Save Us: Evgeny Morozov’s Stand Against Technology Solutionism

Technological solutions for everything is the mantra of our age, but Evgeny Morozov is here to say, STOP. He speaks about Google’s social responsibility, what’s next for the internet, and why he avoids Facebook.

Evgeny Morozov has distinguished himself as one of the most thoughtful—and controversial—observers of technological trends. His 2011 book The Net Delusion laid waste to “the utopian myth of the Internet as a liberator” for those living under authoritarian regimes. In his latest book, To Save Everything, Click Here: The Folly of Technological Solutionism, he takes a critical look at our fetish for high-tech fixes. As is his custom, he has harsh words for those on all sides of the debate.

I spoke with him about his new book, his problem with the term “the Internet,” and why he isn’t on Facebook. What follows is an edited version of that conversation.

One of the things you have argued is that there is something deeply wrong with the way we talk about the Internet. Or “the Internet” as you put it. What exactly is so confused about the way we use this term?

I think initially, when we started talking about the Internet the only reference to it was very technical. It was a reference to the actual physical network. But with time, all sorts of other debates and assumptions got merged into this one giant mega-concept of “the Internet.”

You know, debates about cyberspace for example. If you look at them very closely, in the late 80s they had very little connection to debates about the Internet or the World Wide Web. They were all about putting on glasses and exploring a reality that existed outside of the physical world. Gradually, however, those debates got merged with debates about the online-offline world. Which themselves, to me, are highly culturally and historically specific debates.

You know, it’s not a given that there is an “online” and “offline” world out there. When you use the telephone, you don’t say you’re entering some “telephono-sphere.” You don’t say that, and there is no obvious need to say that when you are using a modem.

So there is this very bizarre way in which the abstract categories we are trying to analyze dumb down our debate. Because we no longer pay attention to the things that matter. Which, to me, all boils down to business models, political economy, notions of the self, notions of collective action, notions of public debate that are embedded in these technologies.

And instead we engage in highly abstract debates about “Internet culture” and “the Net,” finding coherence where, frankly, there is no coherence whatsoever.

You are also very critical of what you call “technological solutionism.” You point out that many of the problems that digital technologies have been designed to ameliorate may not be problems necessarily. And I think the hardest case you have to make has to do with instances of government opacity, or political hypocrisy, or the ambiguity of political discourse. These are things that common sense tells us we should use technology to fight against. And you argue that some of these things are actually useful.

So, with regards to solutionism, I see two features to it. One feature is something you just mentioned, and it’s the fact that we often treat problems as problems when in fact they’re not problems at all. They are actually not bugs, they are features, you know, to use computer-speak.

The second feature is that, you know, even for problems that are problems, not all solutions are born alike.

With regards to politics, what I try to show is that there is a very bizarre tendency to embark on these solutionist initiatives simply out of the sheer sense of the awesome possibilities that these new technologies allow, and not based on some careful thinking about the very limitations of the political project.

Yes, occasionally politicians have to be ambivalent about their promises, in part because they have to appeal to many different constituencies. And they need to negotiate with very different factions and very often they need to say things that have double meanings, which are deliberately phrased in an ambiguous manner to leave them enough space to negotiate.

So, when you have projects that rely on Big Data and technology and soon, perhaps, even facial recognition analysis and emotional analysis that will show that [politicians] are being dishonest or that they say something that contradicts what they said ten or fifteen years ago, that will just shrink the space that they have for negotiation. And again, I just don’t think that that will necessarily make politics better.

But it strikes me that you are defending a double standard here. It’s alright when political institutions craft solutions to social problems, but not when Silicon Valley types do it.

Yeah. I mean, I don’t think it’s a double standard. Again, it’s – I wouldn’t put it that way.

I have no problem with technological solutions to social problems. The key question for me is, who gets to implement them and what kinds of politics of reform do technological solutions smuggle through the backdoor.

So, when a Silicon Valley company tries to solve the problem of obesity by building a smart fork that will tell you that you’re eating too quickly, this is a very particular type of solution that basically puts the onus for reform on the individual. The fact that the food industry might be manipulating you through advertising, that you might not have access to farmers’ markets in your area – all of those questions are kind of bracketed out, and you as a consumer or citizen are basically forced to accept the current system as a given.

It’s a very particular type of micro-level solution to a problem that, I personally think, should probably be solved through a macro-level solution. And that solution might involve technology. But it will also involve reforms of the food industry and investments in infrastructure.

And what also troubles me a lot is that, increasingly, as technology companies in Silicon Valley accumulate too much data about us, and they more or less mediate our every interaction with the world, a lot of policymakers will say “why should we bother with this macro-level reform to begin with, if we can just outsource all of these reforms to Google and Facebook.”

You have often mentioned this pervasive idea of the Internet as eternal and sacrosanct. As someone who rejects that view, play futurist for a second: What kind of technologies could displace the Internet?

I think that definitely the underlying network will probably stay for a while until we find other ways to interconnect our gadgets.

What I expect to see in the next five to seven years is the migration of Big Data and the algorithms that have been developed in the context of Facebook and in the context of Google, into the world at large – into the physical reality. I want to do my next book on the future of public space in the era of smart technologies. Because I think that, ultimately, all of that will break from the purely virtual connections into mediating how we interact with houses and buildings and public squares and shops.

What I do on Facebook will be integrated with what I do when I go to the store. It will be integrated with what I do when I drive my self-driving car. It will be integrated with what I print on my 3D printer, and so forth.

You are concerned that technology companies rarely give enough thought to the social implications of their products. What are some of the responsibilities that maybe Google or Facebook or companies such as this have been shirking?

So one critique that I advance in the book is that there is this complete disregard of important social questions like privacy when companies like Google experiment with new services like Google Buzz or, you know, Google Street View. As Mark Zuckerberg once put it, their mission is to break things and then think about consequences later.

And I think, clearly, we need to become a little more cautious about our celebration of innovation. And also think through – you know – sometimes innovation is disruptive in a very political sense: it disrupts a lot of things that we actually value and hold dear, and we need to think about them before we unleash our innovations onto the world, not afterwards. Because what happens afterwards is that we are told, “hey, technology is here to stay, and we just need to adapt our norms.”

Some might argue that it isn’t really feasible for technology companies to anticipate all of the implications of a new product. Is that asking too much of them?

This is the usual defense that these companies make, and I just find it completely shallow and unconvincing. Again, if you look at Google, it doesn’t take a lot of brainpower to understand that if you publicly list everyone I’ve been emailing with, it will probably have some negative effects on my sense of privacy. Which is what they did when they released Google Buzz.

I mean, it has very little to do with the unanticipated uses. It has to do with complete disregard for issues that may not matter in Silicon Valley, where everyone is very happy. And apparently no one has any marital problems.

But you do believe that there is a role for technology in changing social norms. You are talking here about this idea of “adversarial design.”

I’m talking about adversarial design and this notion of products as troublemakers as opposed to problem solvers. I think that there is something Big Data and sensors can introduce into how we interact with technology. They can present us with more choices. They might actually turn us into more moral human beings.

A good example of this is the caterpillar extension cord that you write about in the book.

Yes. It doesn’t rely on Big Data, but take this example of a caterpillar-shaped extension cord: if you leave devices in standby mode, it starts twitching as if the caterpillar were in pain. To me, it’s a nice way of alerting you, as someone who is a user of electricity, that there are many more issues involved that designers have tried to hide from you. They would rather you not think of devices in standby mode, and would rather make that extension cord as invisible as possible.

And it’s this very paradigm that has brought us to a point where we think about energy – and even cloud computing – as being provided by some invisible infrastructure we no longer have to care about. I’m not sure how far we will be able to go with that paradigm in the future. If we are to replace that paradigm, then I think that making technologies into these triggers for deliberation and reflection is not a bad place to start.

Are there any technologies you have chosen to avoid for philosophical reasons? For instance, are you on Facebook?

I’m not on Facebook. I have a sort of anonymous account that I check like once every six months, every time Facebook rolls out a new feature. I decided not to get on Facebook because, even setting aside the obvious benefit of me being a public figure and promoting my stuff, I just couldn’t find a good reason to use it. And I could find many reasons not to use it. The fact that they would collect all of my personal data is one reason. But the fact that I would be wasting so much time checking what my friends are doing is another one.

I’ve also tried to be very strategic about how I use technology. Last year, I bought myself a safe which has a timer in it. So you can actually lock it and set a time, and it will not open [until that time] even though you have the code and have the key to the safe.

So I use the safe to hide the cable for my Internet router and my iPhone when I need to be working. So, for example, throughout most of January, I would only check my email at about 11pm.

That’s a very low-tech solution.

It’s a very low-tech solution, but it works.