Thousands of internal documents are driving a virtual avalanche of damning news reports on what critics describe as the cruel, profit-focused machine that is social-media giant Facebook.
Now known as “The Facebook Papers,” the redacted documents, memos, presentations, internal discussion threads and charts were obtained by 17 news organizations, and include a slew of new revelations about the company’s internal decisions. They also paint a harsh portrait of reluctance to make changes that would address known issues, including the proliferation of harmful content and hate speech on the platform.
The documents, a combination of Securities and Exchange Commission (SEC) disclosures and leaked documents by way of whistleblower Frances Haugen, appear to have rattled the company. Among other responses, the brand is reportedly expected to soon announce a name change that critics say reflects efforts to circumvent responsibility for harm.
Spokesperson Andy Stone said that the stories painted a false picture of a company harming its users to make a profit.
“At the heart of these stories is a premise which is false,” he said in a statement. “Yes, we’re a business and we make profit, but the idea that we do so at the expense of people's safety or wellbeing misunderstands where our own commercial interests lie.”
Here are some of the most damning allegations to emerge from the papers so far.
Hate Speech at the Core?
According to The New York Times, in an August 2019 internal memo, a group of Facebook researchers identified the company’s “core product mechanics,” including features used to optimize engagement, as a “significant part” of why misinformation and hate speech flourished on the platform.
The “Like” and “Share” buttons could “serve to amplify bad content and sources,” another internal study in September 2020 showed, according to the documents. But despite those findings, Facebook CEO Mark Zuckerberg and other executives have largely shied away from changing the platform’s core features to prevent the proliferation of hate speech. They did, however, test hiding the Like button on Instagram to “build a positive press narrative,” amid separate findings about anxiety in teens, the documents show.
That mentality, which Haugen described in congressional testimony as putting “profit over safety,” also fueled internal discussions over the company’s lopsided approach to content moderation.
Facebook has pushed back on the criticism, arguing that the company invested $13 billion in safety and hired more than 40,000 workers focused on it, according to Stone.
Not All Countries Created Equal
According to The Verge, the documents reveal an internal system that split up the world’s countries into tiers that ultimately prioritized protecting users in some countries over those in others.
The countries that were deemed the highest priority and received the top network-monitoring resources included Brazil, India, and the United States, which belonged in what the company called “tier zero.” For these countries, Facebook built “war rooms” to monitor the network and alert local election officials about potential problems with false claims online.
Germany, Indonesia, Iran, Israel, and Italy were slotted into tier one, and were given similar resources, but with less enforcement.
Another 22 countries were grouped into tier two, with fewer resources. The remainder of the world’s countries were relegated to Facebook’s third tier, where the company would reportedly only intervene if election-related content was brought to its attention by moderators.
An absence of machine-learning classifiers, specifically those built to recognize hate speech and misinformation in different languages, allowed posts that inspired violence to spread unchecked in countries like Myanmar, Pakistan and Ethiopia, the outlet found.
Zuckerberg Accused of Caving to Vietnam Government
Zuckerberg opted to allow Vietnam’s ruling Communist Party to censor “anti-state” posts, effectively handing over control of the platform to the government, sources told The Washington Post. That decision was reportedly made after the Vietnamese government threatened to kick Facebook off its web.
Facebook said it made the move “to ensure our services remain available for millions of people who rely on them every day.”
About That Big Lie
According to Politico, the company fumbled on building a clear strategy for combating content aimed at delegitimizing election results in the United States. That failure proved to be a serious problem when Trump’s Big Lie powered a post-election insurrection, including the Jan. 6 riot.
Many of the offending posts were flagged for containing “harmful non-violating narratives,” which did not break the company’s community rules, documents reviewed by the outlet showed. But employees expressed outrage on internal message boards, accusing company leadership of bypassing common-sense changes “to better serve people like the groups inciting violence” on Jan. 6, according to one employee. “Rank and file workers have done their part to identify changes to improve our platform but have been actively held back.”
Trafficking on the Platform
According to CNN, internal Facebook communications described how women were trafficked on its platform, some of them enduring sexual abuse and prevented from escaping while going without food or pay.
In 2018, Facebook employees flagged Instagram profiles that appeared to sell domestic laborers, but internal documents reviewed by the outlet from September 2019 showed little effort by the company to address the problem.
After a threat from Apple to remove the app from its store in 2019, Facebook made some efforts to remove the content, but the company continues to be plagued by domestic servitude content, the report found.
In February, internal researchers said in a report that labor-recruitment agencies communicated with victims via direct messages and that the social-media platform needed more “robust proactive detection methods” in order to prevent recruitment, CNN said.
In a letter to the United Nations on the subject last year, Facebook said it was working to develop technology to address “domestic servitude,” and also insisted that cases of servitude were “rarely reported to us by users.”