Twitter’s Terrorist Policy
Twitter, Facebook, and YouTube all have company policies that ban bloodthirsty terrorists like Al-Shabab—but are they open to interpretation? Brian Ries reports.
Life ain't easy for an Internet-savvy terrorist.
Twitter has made that abundantly clear over the past 72 hours, as the San Francisco–based social-media company fought back tides of newly opened accounts that claimed to speak for the murderous Islamic militants holed up in a Nairobi mall, where they had killed at least 68 innocent shoppers using automatic weapons and rocket-propelled grenades.
The militants, alleged members of the Somali al Qaeda affiliate Al-Shabab, had barely blasted their way into the upscale Westgate mall when a Twitter account associated with the group began live-tweeting the carnage: "Like it or loathe it! our mujahideen confirmed all executions were point blank range!"
Twitter moved fast to suspend the group from its service, but not before the militants—or someone claiming to speak for them—opened up another, kicking off a game of Whac-a-Mole that continued throughout the weekend and into the start of the week. It's an awkward dance that's happened at least twice before.
The company refused to comment on its actions or its thinking in banning the accounts associated with the terrorist group. Reached by phone, a Twitter spokesperson would only tell The Daily Beast, "We don't comment on individual accounts for security and privacy reasons." The militants, or their spokesperson, did not respond to an email sent to an address associated with the group.
But a cursory glance at the company's rules, which state that Twitter will not actively monitor or censor user content, raises two red flags that are likely keeping the group banned from the site.
The first, listed under "Violence and Threats," spells it out: "You may not publish or post direct, specific threats of violence against others." With tweets claiming that there would be no negotiations and threatening hostages as commandos attempted to land on the mall's roof, the group was a textbook offender. The second, "Unlawful Use," declares, "You may not use our service for any unlawful purposes or in furtherance of illegal activities," like, say, the mass slaughter of innocents.
While vague, Twitter's rules seem to say that no active terrorists may open accounts to spew their propaganda, as happened during the terrible and bloody siege of the Nairobi mall. Still, the boundaries are open to interpretation, which suggests that individuals who are merely associated with terrorist organizations are welcome to spew their cyber-hatred unchecked, just as long as they make no specific threats of violence and commit no illegal activities.
Across the Valley, Facebook and YouTube are also settling into policies that deal with this newfound scourge of social media, with both taking decidedly more authoritative positions against terrorists operating on their platforms.
Asked if a terrorist would be blocked from using Facebook, a spokesperson for the site referred The Daily Beast to the social network's community standards, which state, "Organizations with a record of terrorist or violent criminal activity are not allowed to maintain a presence on our site."
Perhaps YouTube, the Google-owned online video site, which has been dealing with so-called terrorist videos for years, has learned from experience.
Back in 2008, then-senator Joe Lieberman sent Google CEO Eric Schmidt a letter asking the company to remove all videos that mention or feature groups associated with terrorism.
"Islamist terrorist organizations use YouTube to disseminate their propaganda, enlist followers, and provide weapons training—activities that are all essential to terrorist activity," Lieberman wrote in the letter, which was accompanied by a list of videos that the senator's staff believed had violated the company's terms; the list has since been removed from the U.S. Senate Committee on Homeland Security and Governmental Affairs' website. "YouTube also, unwittingly, permits Islamist terrorist groups to maintain an active, pervasive, and amplified voice, despite military setbacks or successful operations by the law enforcement and intelligence communities," he wrote.
The company removed a few of the videos but left most up, saying that those "which did not contain violent or hate speech content" could stay. "We believe that YouTube is a richer and more relevant platform for users precisely because it hosts a diverse range of views," the company explained in a blog post at the time, "and rather than stifle debate we allow our users to view all acceptable content and make up their own minds."
Today the site is a little bit more forthcoming on where the line of acceptable content lies.
Asked Monday if it had an express policy regarding terrorists who use the site to communicate, a YouTube spokesperson stated, "We have a zero tolerance policy towards content that incites violence. Our Community Guidelines prohibit such content and our review teams respond to flagged videos around the clock, routinely removing videos that contain threats or incitement to commit violent acts."
The spokesperson added that YouTube actively removes all videos and terminates any account known to be registered by a member of a designated foreign terrorist organization (FTO) and used in an official capacity to further its interests. YouTube reviews videos and channels when they are flagged by users (using the flagging tool located under every video), she said, or otherwise reported as violating its terms.
The list of FTOs is designated by the U.S. secretary of state as part of the State Department's commitment to "curtailing support for terrorist activities and pressuring groups to get out of the terrorism business," as the department states on its website. The list includes Al-Shabab (added March 18, 2008), Hamas (October 8, 1997), and al Qaeda in the Arabian Peninsula (January 19, 2010). Terrorist organizations that have been delisted, and thus presumably welcome back on YouTube (but perhaps not on Facebook or Twitter), include the Japanese Red Army, the Khmer Rouge, and the Moroccan Islamic Combatant Group.
But here's what is now clear: alongside the spammers, the fraudsters, the hucksters, and the hackers, the people who run the world's premier social networks, Facebook, YouTube, and Twitter, increasingly have one more set of unwanted users to worry about: the terrorists.