My first day at Facebook felt like summiting a personal and professional mountain. In early 2010, Facebook’s Washington, D.C., office was little more than a loft in Dupont Circle. It bristled with Silicon Valley techie energy transported to the buttoned-up East Coast. We were going to change the world.
By the time I left Facebook in 2012, I had been an intern, an “extern,” a Public Policy Communications associate and a communications contractor. During my tenure, Facebook was still an emerging force on Capitol Hill. We spent much of our time persuading members of Congress to set up accounts and (then-new) Facebook Pages. Early familiarity, the idea went, would help Facebook avoid the regulatory hostility Microsoft faced in the 1990s.
For a time, it looked like that strategy might work.
Under CEO Mark Zuckerberg and COO Sheryl Sandberg, Facebook pursued a strategy of creating or joining “self-regulatory” industry groups—which Facebook then dominated due to its rapid growth and huge ad revenues.
In 2011, regulators expressed growing concern that allowing tech giants to police their own data privacy compliance would concentrate power in the hands of a few major players—Facebook chief among them. A few months later, Facebook launched the “Facebook Safety Advisory Board” to address missteps in its handling of child safety and privacy.
A substantial share of the Facebook Safety Advisory Board’s expert members then—and now—receive funding from Facebook. Larger—and potentially more independent—child advocacy organizations like the Campaign for a Commercial-Free Childhood weren’t consulted on product launches like the 2018 Messenger Kids app.
Unsurprisingly, Facebook’s hand-picked advisers flubbed their evaluation. Fundamental flaws in the app allowed kids to join chats with unauthorized adults. Parents and child safety groups criticized Facebook’s botched review process. After broad outrage over the rubber-stamp way Facebook’s “independent” safety board approved Messenger Kids, Zuckerberg pledged meaningful reforms.
The Messenger Kids debacle, like so many user safety challenges facing Silicon Valley, was an avoidable fiasco. Proper oversight could have spared Zuckerberg the embarrassment and produced a stronger product that actually improved child safety on Facebook. Instead, Facebook chose to preserve its industry influence under the guise of “self-policing.”
Silicon Valley wants Congress to believe its web of interconnected self-regulatory organizations is no different from how the American Bar Association polices lawyers. But the global scope of tech’s activities, and its ability to harm tens or even hundreds of millions of users with a single oversight, poses a risk unlike anything private industry has ever known. This year the United Kingdom declared Big Tech’s self-regulatory schemes a costly and dangerous mistake. The United States must follow suit.
I once considered Facebook a universal benefit to American democracy. In the seven years since, I—and former employees including Facebook co-founder Chris Hughes—have watched with concern as the innovative social platform became a virtual nation-state of 2.8 billion citizens, capable of funneling vast amounts of disinformation into the global political discourse.
Senator Elizabeth Warren has proposed an ambitious plan to break up Facebook and other tech giants including Amazon and Google. Her ideas include spinning off major acquisitions like Instagram and WhatsApp from Facebook and liberating some of the 79 companies Facebook has acquired in what Hughes calls a pattern of illegal “serial defensive acquisitions” meant to protect Zuckerberg’s behemoth from outside competition.
Even if Warren wins the presidency in 2020, any Silicon Valley antitrust campaign will likely be dead on arrival if Republicans hold the Senate. But that doesn’t make regulating Facebook a fool’s errand.
Facebook’s excesses cry out for muscular federal regulation, and Warren may find surprising Republican support for a plan that polices Silicon Valley while stopping short of a full breakup. In Facebook’s case, stepping back from antitrust arguments in favor of modernizing and strengthening regulation of digital advertising and data protection may be the surest way to secure tangible legislative victories in defense of our democracy and fair market competition.
Outside of an antitrust framework, Republicans have been willing, even eager, to criticize Facebook’s unprecedented control over news and content. Former Senator Jon Kyl voiced Republican concerns about the “increasing scale and complexity of Facebook’s content moderation practices” in a recent Wall Street Journal op-ed. In September, four Republican senators sent a letter to Zuckerberg bemoaning the opaque manner in which Facebook regulates “appropriate” content.
Democrats may laugh at Republicans’ focus on Facebook’s “censorship” of conservative content, but such criticism represents a rare crack in traditional GOP support for the autonomy of massive corporations.
Just this month, the Senate issued a stinging report on foreign interference in the 2016 election. The Republican-authored report singled out Facebook for the apparent ease with which Russian troll farms co-opted the network’s ad platform to sow mass disinformation.
Days after Zuckerberg’s tense grilling on Capitol Hill, Twitter CEO Jack Dorsey finally decided he’d had enough and announced that Twitter would suspend all political advertising globally. The risks of misinformation – and potential government punishment – were just too real.
Dorsey’s announcement included a clear swipe at Facebook’s anything-goes ad strategy: “It’s not credible for us to say: ‘We’re working hard to stop people from gaming our systems to spread misleading info, buuut if someone pays us to target and force people to see their political ad…well...they can say whatever they want!’”
Twitter’s abrupt departure from political advertising follows multiple unsuccessful attempts to make the space more transparent and less prone to abuse. But Dorsey now acknowledges such efforts are fated to fail in the face of professional disinformation campaigns growing “at increasing velocity, sophistication, and at overwhelming scale.”
He sounds an awful lot like Elizabeth Warren there.
So, what’s Facebook’s excuse? Dorsey’s decision leaves Zuckerberg as the largest and most visible purveyor of hyper-targeted online political advertising. And Dorsey is right—there is no truly effective way to police digital political advertising. Facebook’s continued participation now smacks of corporate greed more than Zuckerberg’s stated goal of “connecting people.”
Facebook will almost certainly draw increased scrutiny for its decision to continue peddling political ads of questionable provenance and honesty. That’s terrible timing for Zuckerberg, who is already distracted with a major effort to salvage his troubled cryptocurrency project, Libra.
Libra is a prime example of how Facebook utilizes its incredible market power to get ahead of even its own executives. Imagine if Ford released a radically new type of car, but even Ford’s CEO couldn’t explain exactly how the car worked. Imagine Ford executives shrugging when asked about the safety of their new car.
That disconcerting hypothetical looked an awful lot like Zuckerberg’s response to pointed questions about the potential liabilities of his proposed currency. And it goes without saying that a global currency built on an unproven crypto framework has far more potential to cause international problems than a new SUV.
British regulators recently slapped Facebook with a paltry $643,000 fine for its role in Cambridge Analytica’s sweeping data privacy violations. The FTC followed with a $5 billion penalty—the largest privacy fine it has ever imposed—for Facebook’s failure to protect its users’ data security. Even that record sum represents only about a month and a half of Facebook’s global earnings.
Every privacy violation or foreign misinformation campaign is magnified by the sheer size and market dominance Facebook enjoys. It represents the business equivalent of mixing 100 million unvaccinated children into the American population and trusting that their legal guardian will notice when one starts coughing.
The office at 1666 Connecticut Avenue that once housed the Facebook D.C. team now hosts Oculus, a virtual reality company acquired by Zuckerberg in 2014 that faces its own data security challenges. Facebook has moved on to flashier lodgings, where it spent nearly $13 million last year lobbying Congress for a hands-off regulatory approach.
The men and women I worked with at Facebook weren’t interested in forming a security-compromising corporate monopoly. They had no desire to crush competition and jeopardize American democracy. But big operations have a tendency to yield unintended consequences.
The best intentions of brilliant people did little to stop a largely self-regulated Facebook from becoming the most pressing risk to a healthy marketplace—and a healthy democracy.
Self-policing has given Silicon Valley the cover to expand unchecked while doing little to make our digital landscape more secure. Elizabeth Warren’s plan to bring tech giants like Facebook under a robust federal regulatory regime is the clearest solution. Tech companies may be great at a lot of things. They are no substitute for a strong federal government.