The bots bite back.
Earlier this month, researchers documented how a series of pro-Kremlin social-media users and outlets, along with so-called alt-right personalities, amplified a particular strain of rhetoric around the Charlottesville protests. According to the Atlantic Council’s Digital Forensic Research Lab (DFR Lab), sites like Sputnik, RT, and Infowars were framing the events in Virginia within a long-running narrative that Nazis led Ukraine’s revolution. Included in that mix were Twitter bots—automated accounts set up to do their controller’s bidding—that attempted to spread the message further.
But now, the bots have turned their sights on the researchers themselves, with fake profiles, a storm of dodgy followers, and a bombardment of the analysts’ Twitter feeds.
“We exposed some of them, in their existence, over the weekend. I think this was a way of intimidating us,” Maks Czuperski, the director of DFR Lab, told The Daily Beast.
On Aug. 18, DFR Lab published an analysis of how U.S. alt-right platforms mimicked the sentiment of pro-Russian outlets concerning Charlottesville. The following week, ProPublica picked up the story, but something strange happened: apparent bots quickly retweeted the article thousands of times.
A day later, an account with just 74 followers described the investigative-journalism outlet ProPublica as an “alt-left #HateGroup and #FakeNews site funded by Soros.” That tweet racked up some 23,000 retweets, seemingly from a group of bots. A similar tweet managed to grab more than 12,500 retweets. Ben Nimmo, a senior fellow at DFR Lab, then wrote his own analysis of the tweets against ProPublica, and a guide on how to spot a bot.
Those retweet bots do little to actually propagate a tweet: most of them probably have no followers who are real users. Instead, their goal is likely to saturate a target’s notifications.
“They are not amplifying the accounts, but what they are doing is intimidating the users,” Nimmo told The Daily Beast. “They’re standing in an empty room, shouting really, really, loudly.”
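Nimmo’s guide lays out the kinds of signals analysts look for when judging whether an account is automated. As a rough, hypothetical sketch—not DFR Lab’s published criteria—the field names, thresholds, and scoring below are our own assumptions, showing how a few such red flags (tiny follower counts, near-total retweet activity, an anonymous default profile) can be combined into a simple heuristic score:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class AccountStats:
    """Basic public stats for a Twitter account (hypothetical field names)."""
    created_at: datetime       # when the account was created
    followers: int             # how many accounts follow it
    tweets: int                # total tweets posted
    retweets: int              # how many of those tweets are retweets
    has_default_avatar: bool   # still using the default profile image


def bot_score(acct: AccountStats, now: Optional[datetime] = None) -> int:
    """Return a rough 0-4 score; higher means more bot-like.

    Thresholds are illustrative guesses, not established cutoffs.
    """
    now = now or datetime.now(timezone.utc)
    age_days = max((now - acct.created_at).days, 1)
    score = 0
    if acct.followers < 10:                                  # "shouting in an empty room"
        score += 1
    if acct.tweets / age_days > 72:                          # implausibly high daily activity
        score += 1
    if acct.tweets and acct.retweets / acct.tweets > 0.95:   # almost pure amplification
        score += 1
    if acct.has_default_avatar:                              # anonymous profile, no personal details
        score += 1
    return score


# A long-dormant account that suddenly does nothing but retweet would score high.
suspect = AccountStats(
    created_at=datetime(2011, 1, 1, tzinfo=timezone.utc),
    followers=3,
    tweets=21_000,
    retweets=20_900,
    has_default_avatar=True,
)
print(bot_score(suspect))  # -> 3 with these numbers
```

No single signal is conclusive on its own; a score like this only flags accounts for closer, human review.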
But things got weirder.
“The Atlantic Council’s tweets, which are normally retweeted a couple dozen times, got retweeted almost 108,000 times and some of us got loads of fake new followers,” Donara Barojan, also from the DFR Lab, told The Daily Beast. She gained more than 1,000 new Twitter followers, most of which appeared to be automated accounts.
Barojan said most of the bots that followed her don’t tweet. But the automated accounts have been on Twitter for years.
“What’s interesting is that a lot of them have been created more than two years ago, which goes on to show how little oversight there currently is,” Barojan said. One purpose might be to undermine the researchers’ credibility on Twitter by flooding their follower lists with fake users.
“DFR Lab does not deploy or utilize bots in any capacity because it would undermine the credibility of our research. We research social media, disinformation, security issues, and where each intersect. It is meta, but perhaps inevitable, that our research on bots would boomerang back to our team,” Graham Brookie, deputy director and managing editor of the DFR Lab, told The Daily Beast.
Someone also created accounts attempting to impersonate, or at least mock, employees of DFR Lab and their research. One, mirroring Nimmo, audaciously claimed that DFR Lab’s research found Russians used a secret weapon to cause Hurricane Harvey. A second impersonated Czuperski, the DFR Lab director.
“Our beloved friend and colleague Ben Nimmo passed away this morning. Ben, we will never forget you. May God give you eternal rest. RIP!” one of that account’s tweets read, including a picture of Nimmo.
As of Tuesday afternoon, bots had retweeted that message more than 21,000 times. Bizarrely, that account had not tweeted since 2011; it had lain dormant for years.
When asked if Twitter plans to introduce any further measures to stop the creation and use of bot accounts, a company spokesperson pointed to its previously published blog post on the topic and the company’s policy on automated accounts.
“It’s a way of them showing ‘Hey, we’re here, we’re watching you,’” Czuperski said.