At least 22 of the approximately 470 inauthentic Facebook accounts tied to Russia had corresponding Twitter accounts, Twitter disclosed on Thursday in its first-ever public presentation about Russian propaganda hijacking its platform.
In a blog post summarizing its presentation for House and Senate intelligence committee staffers, Twitter said the discovery of the 22 accounts identified a further “179 related or linked accounts,” but did not specify the scope of “action” it took on “ones we found in violation of our rules.” Many of the 22 accounts had already been suspended for terms-of-service infractions, it said.
But the disclosures Twitter provided Thursday were less extensive than an early September Facebook disclosure widely criticized for insufficiency. Twitter did not estimate the volume of its usage it believes is connected to Russian government influence efforts, and suggested that proxy servers and virtual private networks (VPNs) may make authentication impossible. And it focused more on steps it had already taken “to fight suspicious bots” than on examining those bots’ impact.
Increased monitoring, however, now helps the company “catch about 450,000 suspicious logins per day,” it said.
Sen. Mark Warner (D-VA), the vice chairman of the Senate intelligence committee, lashed out at Twitter later Thursday, calling its responses during the closed-door meeting “inadequate on almost every level.” Warner said the company needs to publicly answer questions about Russia’s efforts.
“The presentation that the Twitter team made to the Senate intel staff today was deeply disappointing,” Warner told reporters gathered outside his office on Thursday, adding that the representatives showed “an enormous lack of understanding of how serious this issue is, the threat it poses to democratic institutions, and begs many more questions than they offered.”
Warner stressed on Thursday that Russia’s attempts to sow discord and division in the U.S. through the use of digital platforms “did not end with Election Day in 2016. It continues.” He added that Twitter showed “either an unwillingness to take this threat seriously, or a complete lack of a fulsome effort.”
After innumerable harassing tweets and bot-borne hashtags of disinformation, it was Twitter’s turn, however preliminarily, to answer investigators about Russian propaganda hijacking its platform. It was the first such session since the congressional inquiries turned their focus to Twitter, following a bruising summer for social-network giant Facebook, which is enduring its first wave of relentless criticism from Washington for obscuring Russia-tied inauthentic electioneering.
And it was another sign that the political class is reckoning with the emerging reality that the same social networks that helped them win an election can just as easily provide a surreptitious forum for foreign interference. Facebook, Twitter, and Google possess intensely scalable tools for identifying voter preferences, thanks to the scads of data their hundreds of millions of users provide, and for aiming messages based on that flood of data.
Twitter appeared to stop short of the open-ended review for investigators that Facebook is conducting. The social-media company couched its presentation for Congress as preliminary and its efforts at ensuring the integrity of tweets as a work in progress: “With hundreds of millions of Tweets globally every day, scaling these efforts continues to be a challenge. We will continue to look into these matters on an ongoing basis, and we fully anticipate having more to share as we look into further requests for information.”
It was unclear ahead of the meeting with investigators how much substance would be discussed. One source close to the committees expected on Wednesday that the meeting would mostly serve to negotiate the parameters of upcoming public testimony. But committee members were silent on what was discussed during the sit-down, which lasted more than three hours.
The House intelligence committee announced its intention to hold an October hearing with tech firms. Its Senate counterpart scheduled one for November 1.
That testimony is highly anticipated, as the tech giants have said next to nothing publicly about Russian activity on their platforms. “Congress and the American people need to hear this important information directly from these companies,” the heads of the House’s Russia inquiry, Republican Mike Conaway and Democrat Adam Schiff, said yesterday.
Richard Burr, the North Carolina Republican chairing the Senate intelligence committee, said the social media companies have been “extremely helpful to the committee,” and he expects to learn more from them in a public setting.
“If I knew all the answers, then there would be no need to invite them in,” Burr continued. “I think this is of such significance in the election, so that’s the reason we chose to do it in a public setting because it does not involve really any intelligence products. So it is something that can be aired, discussed, and thoroughly vetted in an open setting.”
Facebook, under relentless pressure following months of dismissing its relevance to the election, last month partially disclosed the existence of thousands of paid Russian propaganda ads—without divulging their content.
“I think a lot of it comes from the passion that you have to look for it,” Burr said of Facebook’s acknowledgement of the Russian efforts on the social media site. “And I think now we’re in a different phase. And they learned a lot from the French elections about the manipulation of social media platforms.”
The Daily Beast and others have confirmed that the surreptitious Russian-linked ads and inauthentic Facebook accounts promoted physical rallies in 17 cities for Donald Trump; spread anti-refugee and anti-Muslim propaganda to white right-wing audiences; and impersonated real U.S. Muslim groups to send anti-Hillary Clinton and anti-American messages to unsuspecting audiences that could be expected to oppose Trump. CNN reported that the Russians impersonated Black Lives Matter to promote “gun rights” and anti-immigrant sentiment.
About a quarter of those were geographically targeted, Facebook said, raising the as-yet-unanswered question of whether and how Russian propaganda selected areas expected to go Democratic—as well as whether Russia’s and Trump’s Facebook targeting lists converged. Facebook has insisted publicly it is poorly positioned to determine collusion between Trump and Russia, a stance investigators and other observers disbelieve.
Yet Twitter, like Google, has said even less about election-time Russian activity on its platform. It has disclosed far more about doubling the character limit on tweets than about any effort to root out inauthentic or malicious activity. Schiff recently said that Americans are only aware of “a subset of a subset” of online Russian election interference.
But researchers have centered on Twitter as a particular vector for fake news and laundered Russian messaging targeting American audiences. A former FBI agent, Clint Watts, testified before the Senate’s Russia investigation about Twitter bots and human-driven accounts amplifying RT and Sputnik news stories that ranged from anti-American framing to outright falsehoods. Among the profile keywords the inauthentic users employed to build trust amongst the American right, which for generations considered Russia an adversary, were “God, military, Trump, family, country, conservative, Christian, America and constitution,” Watts found.
Russian propaganda delivered to Americans on social media seeks to heighten ethnic divisions, play off security fears like domestic terrorism, and increase anxieties about social and economic decline, researchers say. It can carry offline impacts, from Americans unknowingly showing up at Russian-promoted rallies to what Watts described as observable “stock dips which allow all sorts of predatory trading and other things to happen.” Knowing that editors in U.S. newsrooms obsessively monitor Twitter trends, Russian trolls promote fake stories in the hope that legitimate journalists pick them up.
Twitter said Thursday that it will introduce unspecified new integrity safeguards in the coming weeks and months, including “new and escalating enforcements for suspicious logins,” but seemed to caution against expecting them to have a major impact on the platform.
Citing a report finding 85 percent of global email is spam, the company said: “Obviously email is very different from Tweets, but it’s important to understand the scale of what we are dealing with, and that this is a global issue for all platforms.”
Lawmakers are already considering ways to combat future efforts by Russia or other foreign actors to meddle in American elections. Sens. Warner and Amy Klobuchar (D-MN) are piecing together legislation that would require major platforms such as Facebook to publicly disclose the organizations or individuals who purchase more than $10,000 in election-related advertising.
Burr dismissed the idea, calling it an overreaction that isn’t under Congress’ purview.
“It’s extremely odd to me that you could have a legislative remedy before you knew what the problem was,” Burr said. “Foreign money in U.S. elections—I don’t care how it comes in—is illegal. So this may at the end of the day not be a congressional issue. It might be a [Federal Election Commission] issue. And I don’t think that you need legislation to, in any way, further stipulate that foreign money in U.S. elections is illegal.”