DIRTY MONEY

The Big Business of Online Abuse

For a select few, Internet harassment isn’t just a way to bully others. It’s a way to rake in the bucks.

12.28.15 5:02 AM ET

It can be a terrifying experience: thousands of individuals ganging up on a single person to ruin his or her life and, in some extreme cases, drive that person to suicide. What makes this ugly phenomenon even more troubling is that online mob harassment is sometimes done for profit. For a select few, trolling is good business.

In recent years, online harassment and disinformation campaigns have become industrialized. This year The New York Times published a stunning exposé on Russian troll farms: anonymous office buildings full of professional internet “trolls,” a generic term for people who flood social media with intentionally provocative messages. A single “farm” supposedly generated revenues of 20 million rubles (roughly $300,000) each month. These trolls-for-hire were reportedly deployed to spread false rumors about an Ebola outbreak in Atlanta, post pro-Kremlin messages on various social media platforms, and disparage Ukrainian President Petro Poroshenko. Individuals and groups, including those sponsored by the state, use the Internet to spread propaganda and shut down dissent. Occasionally, we catch a glimpse of the commercial, illicit businesses sprouting up to support them.

Non-state actors are also using technology to orchestrate online hate campaigns. For example, the so-called Islamic State used an Arabic-language mobile app called The Dawn of Glad Tidings to coordinate its social media operation. The app let users temporarily surrender control of their accounts to ISIS, which used the thousands of conscripted accounts to post propaganda on its behalf.

Of course, most online harassment isn’t coordinated or paid for; it’s just a bunch of people being cruel. But those who can’t attract thousands of willing followers can purchase thousands of fake ones, for as little as $5, to make it seem as if one lonely person’s rantings are shared by a legion of like-minded abusers. Italian security researchers Andrea Stroppa and Carlo De Micheli estimated in 2013 that fake followers on a social media platform could generate anywhere between $40 million and $360 million annually. Since then, despite crackdowns on fake accounts by prominent tech firms, the business in ersatz fans continues to grow.

The rise of “revenge porn,” the nonconsensual release of explicit images or video, often perpetrated by disgruntled former partners, has spawned a new genre of paid internet porn and supplied raw material to the lucrative business of blackmail. The now-defunct website Is Anyone Up?, which first brought revenge porn to international attention in 2010, reportedly generated 30 million page views and $10,000 in ad revenue each month. The prevalence of this form of sexual harassment has prompted governments to draft legislation banning the release of pornographic material without the consent of all participants. But given how profitable revenge porn has become, these laws are likely to have only limited success.

While the business of abuse is booming, innovative models for protection, in many cases founded by victims themselves, are proving viable. Crash Override, for example, publishes simple steps to help victims restore their online reputations. GGBlocker is a Chrome extension that filters out websites associated with a particular harassment campaign, denying the publishers ad revenue and traffic, and Block Together is a digital blacklist that reduces the burden of blocking many abusive Twitter accounts. These tools survive with little support, partly because they aren’t profitable, but also because anyone who backs them publicly risks becoming a target.

Surveying the impact of online mob harassment, one could conclude that there are simply too many would-be harassers and too few sustainable solutions. But this doesn’t have to be true. We must reject the inevitability of a toxic Internet.

Imagine if online platforms had the capacity to effectively moderate comment threads and message boards, so that threatening and harassing comments were filtered long before they reached their intended targets. Imagine if victims of online harassment had as many legal recourses as victims of physical harassment; today, there is no real digital equivalent of a restraining order. Imagine the benefits of creating spaces online for thoughtful discourse, where people didn’t have to fear eliciting the wrath of online mobs.

But we should use more than our imaginations. We should use our resources, our entrepreneurial spirit, and our collective voices to advocate for tools that make the Internet a place where threatening to rape and murder someone isn’t good business.

Yasmin Green is the head of strategy and research at Google Ideas.