Social Media Giants Have a Big LGBT Problem. Can They Solve It?
YouTube, Tumblr, Instagram, and others have caused offense to LGBT users in recent months. Homophobia is partly to blame. So are technical snafus, and corporate and human failure.
Lately, social media giants have been doing a lot of apologizing over LGBT foul-ups.
Each individual incident has sparked outrage and generated national headlines, but taken together they paint a picture of an emerging crisis for these companies: When marginalized groups depend on your platform to build community, relying too heavily on algorithms can have unintended—and sometimes hurtful—consequences.
The roots of that crisis came into clearer focus in March 2017, when LGBT YouTubers noticed that many of their videos had been categorized as “restricted”—or “potentially inappropriate” for viewers who opt to turn on a content filter.
The company apologized on Twitter: “Sorry for all the confusion with Restricted Mode.” In a subsequent statement to The Daily Beast, YouTube explained that “some videos” had been “incorrectly labeled by our automated system.”
Sometimes the symptoms of this crisis look complicated, but the underlying cause is simple. In June 2017, Tumblr said it was “deeply sorry” for categorizing some LGBT-related posts as not suitable for work, or NSFW, in its Safe Mode. But the underlying issue, Engadget explained, was that Tumblr had flagged as inappropriate all posts from users who had marked their blogs as explicit, whether or not the posts themselves were NSFW.
But more often than not, these PR hiccups appear to have been due to technical systems behaving in unanticipated and convoluted ways. In November 2017, Twitter apologized after images posted under the hashtag #bisexual were filtered out of search results.
In a tweet, the company later blamed “a technical issue,” explaining that it tries to “identify sensitive media” by compiling “a list of terms that frequently appear alongside adult content.” In layman’s terms, because “bisexual” sometimes appears in pornographic posts, it was deemed guilty by association and put on the naughty list.
That list “was out of date,” Twitter explained, and “had not been maintained and incorrectly included terms that are primarily used in non-sensitive contexts.”
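To see how a term can end up “guilty by association,” consider a minimal, hypothetical sketch of co-occurrence-based flagging. The post data, threshold, and function names here are invented for illustration; this is not Twitter’s actual system, only the general approach the company described:

```python
from collections import Counter

# Stand-in label for posts already classified as adult content.
ADULT_MARKER = "nsfw"

# Tiny invented corpus: (label, terms appearing in the post).
posts = [
    ("nsfw", ["explicit", "bisexual"]),
    ("nsfw", ["explicit", "bisexual"]),
    ("safe", ["bisexual", "pride", "community"]),
    ("safe", ["bisexual", "visibility", "day"]),
]

# Count how often each term shows up in adult-labeled posts.
adult_counts = Counter()
for label, terms in posts:
    if label == ADULT_MARKER:
        adult_counts.update(terms)

# Naive rule: any term seen in adult posts at least twice is "sensitive".
sensitive_terms = {term for term, n in adult_counts.items() if n >= 2}

def is_filtered(terms):
    """A search result is hidden if it contains any 'sensitive' term."""
    return any(t in sensitive_terms for t in terms)

# A benign post about Bisexual Visibility Day gets filtered out too,
# because "bisexual" co-occurred with adult content elsewhere.
print(is_filtered(["bisexual", "visibility", "day"]))  # True
```

Because the rule looks only at co-occurrence, an identity term that sometimes appears near adult content is swept onto the list alongside genuinely explicit terms, and without regular maintenance it stays there.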
This year, especially, the limitations of leaning on code to moderate the massive amount of information that flows through these platforms have become readily apparent.
In June 2018, as Forbes reported, many LGBT YouTube creators noticed that hurtful anti-LGBT advertisements were running on their videos. YouTube told Forbes at the time that they were “looking at ways to improve our policies going forward.”
Then, at the end of Pride Month, YouTube issued a mea culpa over both the anti-LGBT advertisement snafu and for a separate, more long-standing controversy: Many LGBT YouTube creators have experienced demonetization, meaning that they cannot make money off some of their content because it is deemed inappropriate for advertisers.
YouTube admitted on Twitter that it had “let the LGBTQ community down.” Its promise: “We’re sorry and we want to do better.” Once again, “systems” are in part to blame: As the Verge noted, YouTube has previously explained the demonetization issue by saying that its “systems get it wrong” in some instances.
In July 2018, systems were once again the culprit: shortly after The Daily Beast brought the situation to the company’s attention, Yelp stopped its search box from offering suggestions that contained anti-transgender slurs like “Tranny Bars” and “Shemale Clubs.”
The company told The Daily Beast that the phenomenon was “a machine-generated error,” saying that those particular searches were rarely used “in the huge volume of search queries” that it receives, but that its “computer-generated models still try to match” even more obscure terms.
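That failure mode is easy to reproduce in miniature. Below is a hypothetical sketch of a suggestion model that tries to match even rarely used historical queries; the query log, counts, and function are invented, not Yelp’s actual code:

```python
# Invented log of past search queries and how often each was typed.
query_log = {
    "transmission repair": 12000,
    "trans rights rally": 5000,
    "tranny bars": 3,  # rare, offensive query lurking in the log
}

def suggest(prefix, min_count=0):
    """Return logged queries starting with prefix, most frequent first."""
    matches = [(q, n) for q, n in query_log.items()
               if q.startswith(prefix) and n >= min_count]
    return [q for q, _ in sorted(matches, key=lambda x: -x[1])]

# With no frequency floor (and no denylist), the slur surfaces as a
# suggestion purely because it exists somewhere in the historical log.
print(suggest("tran"))

# A minimum-frequency threshold is one simple mitigation, though a
# curated denylist is more reliable for known slurs.
print(suggest("tran", min_count=100))
```

The point is that “trying to match” obscure queries is a design choice: a model tuned to be maximally helpful on rare inputs will happily resurface the worst things anyone ever typed.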
Some of these LGBT problems are isolated and easily remedied. In July 2018, for example, Instagram apologized after LGBT outcry over the removal of a photograph of two men locking lips.
The men claimed that they were initially told the post violated “community guidelines.” Instagram then said that it was “removed in error and we are sorry.” The post quickly went back up.
But other times, the problems are widespread and insidious—and only caught long after the fact. In August 2018, for example, the Telegraph reported that some young LGBT Facebook users had been receiving ads for conversion therapy programs. The company told the Advocate that it is “always working to find and remove ads that violate our policies.”
But if anti-LGBT ads are making it onto the platform in the first place, automated systems appear to be at least partially at fault. Facebook reportedly relies on both “machine learning systems” and “human reviewers” to flag insensitive ads. The company said as much to the Verge last year, in a statement about improvements made to the platform after ProPublica revealed that investigators had been able to buy Facebook housing ads that openly discriminated on the basis of race.
However, it’s not always easy to discern the precise mixture of human wisdom and computer processing that these companies use to regulate the enormous amount of content that gets submitted to their platforms on an hourly basis.
Earlier this month, for example, Facebook apologized after The Washington Post revealed that several LGBT-themed advertisements had been blocked because they were being interpreted as political. As the Post noted, the company would not say “how much of the filtering was driven by algorithms rather than human monitors.” The secret sauce remains secret.
But what has become quietly clear over the past two years—as social media companies deal with a broader deluge of misinformation from bad actors—is that LGBT people aren’t being properly served by the very platforms that, in many instances, have allowed us to connect with each other.
All of the companies mentioned above have something very specific in common: They—or their parent companies—have a perfect 100 score on the Corporate Equality Index, a measure of LGBT-friendliness published by the Human Rights Campaign.
That means all of them offer transgender-inclusive health benefits, publicly engage the LGBT community in a positive way, and have non-discrimination policies that include both sexual orientation and gender identity. On paper, they rank among the best places for LGBT people to work.
Sometimes they even file amicus curiae briefs in support of LGBT causes: Google, Facebook, and Twitter signed onto one in support of same-sex marriage in 2013; Twitter added its name to a brief in support of transgender teenager Gavin Grimm in 2017, as he sought the right to use the boy’s restroom at his school.
So why have all of them recently run into some form of LGBT-related trouble?
Daniel Faltesek, a new media communications professor at Oregon State University and the author of Selling Social Media: The Political Economy of Social Networking, says social media companies are failing to look after their most important commodity—the platforms themselves.
“I believe the corporations and their management really are [LGBT]-friendly,” he said, “but the problem is that they have allowed their basic product—that user interface where people find each other and make meaning together—to drop to the bottom of their priorities list. And they’re starting to pay for it.”
Instead of consulting with communications experts at the outset of the social media era, Faltesek says, many of these companies “sort of went on their own and assumed that things would work out,” placing their faith in naïve assumptions about free speech and the marketplace of ideas: Let everyone say whatever they want online, the underlying theory went, and “the truth would magically come shining through.”
They seemed to be, as Faltesek humorously describes it, “really convinced that the internet is this small-scale version of a Greek agora where everyone can be a warrior-poet.”
As more and more users flocked to these platforms, however, that naïveté was exposed for what it was. The internet wasn’t going to be an exceptional place where facts would prevail over prejudice.
It was, essentially, the equivalent of giving a megaphone to everyone in a crowded public place and encouraging them to talk to each other with very limited refereeing. The dream was the agora; the reality was Times Square.
“I think they got wrong what the internet was,” says Faltesek. “They thought it was something special and new—when it was just more of the same in terms of communication. They also got the scale wrong: they imagined the internet was going to be small and static, not massive and dynamic.”
The 2016 election provided the ultimate wake-up call: Bad faith actors had successfully gamed social media platforms to their own advantage. Conspiracy theories had spread through the populace unchecked. The truth—about everything from LGBT issues to climate change—was getting buried underneath incessantly repeated falsehoods.
The social media giants, as Faltesek puts it, had been “badly played.” And as much as they supported their LGBT employees—and as vocal as they were about LGBT rights—their platforms had become places where anti-LGBT forces could do serious damage.
Faltesek would split the LGBT social media crisis into three subcategories: Problems with search algorithms, problems with advertising, and problems with monetization. Twitter hiding bisexual images or Yelp surfacing “shemale” search suggestions are examples of the first. Conversion therapy ads making it onto Facebook are examples of the second. LGBT YouTubers getting their content demonetized typifies the third.
Social media companies are taking all three of these areas—search, advertising, and monetization—more seriously, post-2016: They want to surface better search results.
With increases in the number of monthly active users slowing down across many services, they arguably need advertising dollars now more than ever. And they need their users to create good content—especially on platforms like Facebook, which many young people are abandoning.
But companies that haven’t always realized—or admitted—that they are actually in the business of human communication may be reluctant to confess that algorithms are not the answer.
Using “computer-generated models” to help users with even the most obscure search requests is how “tranny” ends up as a suggestion on Yelp.
Turning to computers to help assess the quality or content of advertisements will inevitably backfire, as Facebook has proved. And trying to come up with an algorithm that determines whether any given user-generated video would be palatable to brands, as YouTube has learned the hard way this past year, is going to cause trouble—especially for creators whose very lives are the object of the sort of intense political debates that many brands like to avoid.
“At the end of the day, the computer’s not magic,” says Faltesek, simply.
The computer is incapable of making the sort of decisions that, say, an editorial team might. But because many social media platforms are still concerned with appearing to be politically neutral arbiters of open expression, as Faltesek notes, showing too much of a human hand in the realms of content moderation or advertisement approval carries with it a certain lingering stigma.
The intense politicization of LGBT issues only exacerbates the awkwardness with which these ostensibly LGBT-friendly—but also theoretically politically neutral—companies manage their platforms.
LGBT people are only seen as “political” because anti-LGBT actors are trying not only to stop us from acquiring equal rights, but to roll back what rights we have already secured. To demonetize LGBT content for being too laden with “politics,” for example, is to give in to an ideological framing forcibly thrust upon LGBT people by the very anti-LGBT groups whose positions many of these Silicon Valley companies have publicly opposed.
Such a stance also sends a discouraging message to LGBT creators who are trying to use the platform to help others feel less alone.
Chase Ross, a YouTube vlogger and transgender advocate who has drawn attention to the demonetization issue, told The Daily Beast that “YouTube has become a safe space for a lot of people in the LGBTQ+ community” but that the struggle of constantly appealing his videos “hurts so much.”
“We have a place to express ourselves and build a community,” he said. “Honestly, it’s a beautiful thing—except when we are told that our lives aren’t ad-friendly? What about my life isn’t ad-friendly?”
One particular sore point for Ross is a statement that YouTube issued to the Verge in response to his theory that the word “transgender” led to immediate demonetization: “Successful appeals ensure that our systems get better and better.”
To Ross, that comes across as a suggestion that transgender people need to help a computer learn not to take their revenue away: “Hire more engineers, fix the algorithm, don’t put it on us and tell us we need to change it. No. You change it.”
Meanwhile, anti-LGBT actors will keep submitting advertisements to social media platforms as long as they are able—and will probably keep succeeding at slipping them through moderation processes. The problem with algorithms, as Faltesek notes, is that they can be “reverse engineered.”
“Those people are constantly playing a strategic game,” he tells me. “They are looking for what the algorithm is today and they’re going to change their videos and change their messaging and change everything until it shows up where they want it to be.”
For platforms faced with relentless anti-LGBT attacks, including ads just euphemistic enough to pass muster, trying to remain neutral is, in fact, tantamount to picking a side.
“They can’t be still on this train,” says Faltesek. “There is no strategic advantage for them not to take a position… The attack is always coming. And they need to stop pretending like it’s not going to come.”
More moderators can be employed to sort through advertising and review content—but this comes at a human cost.
As Motherboard reported in September, Facebook employs about 7,500 moderators to sift through content created by literally billions of monthly active users—and there is now a class-action lawsuit from one ex-moderator who claims that she developed post-traumatic stress disorder from the vile content she viewed.
These companies can also choose to show as little patience for the politicization of LGBT people’s lives on their platforms as they would within their own offices and hallways.
But there’s also a chance that today’s social media giants made misguided choices at the start that they will ride to the very end.
Note that in response to most of the past incidents mentioned in this column, social media companies promised to tweak their systems, update a list, or look under the hood of one algorithm or another. None gave any indication that they needed to fundamentally rethink their approach.
Faltesek believes that real progress—the kind that can stave off decay and disengagement—would require these companies to adopt a fundamentally different conception of themselves.
“They might stop thinking of themselves as just a replacement for an old advertising executive, and might start to see that they really have a high level of responsibility to their communities that they serve—that it has to be fun and rewarding and important to spend your time on this platform.”
But if that doesn’t happen, he predicts, we’ll see more and more users of mainstream social media platforms—especially those belonging to marginalized groups—asking themselves, “Why are we using this? Why is it better than alternatives?”
Ross, who has been on YouTube since 2006, tells me that he has no immediate plans to abandon his platform of choice. But that doesn’t mean there’s no scenario under which he would depart: “I think people at YouTube need to focus more on their creators because even though I’m not planning on going anywhere, if it got worse, I would leave.”
As it stands, the LGBT community has served as proof—a cautionary tale, really—that the big problems facing today’s social media titans can’t be solved bit by bit, with apologies uttered and quick technical fixes made every few months as new issues arise.
That would be like sending a lone Roomba to clean up Times Square.