FOLLOWING THE terrorist attacks in Paris and San Bernardino, Calif., President Obama and French officials have said they want U.S. Internet technology companies to do more to resist becoming terrorist tools. Congress, too: Sen. Dianne Feinstein (D-Calif.) quickly reintroduced a bill that would require social media firms to report “any terrorist activity” — vaguely defined — to U.S. authorities.
But when it comes to cracking down on social media, governments must tread carefully.
Advocates for strong intervention cite child pornography as an analogy. Social media sites use screening software that compares digital fingerprints of uploaded photos against a database of known images maintained by the National Center for Missing & Exploited Children. The system can automatically block offending material.
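As an illustration of the mechanism, here is a minimal sketch of hash-based matching against a hypothetical blocklist. It uses an exact cryptographic hash for simplicity; deployed systems such as Microsoft's PhotoDNA rely on perceptual hashes that still match after an image is resized or re-encoded.

```python
import hashlib

# Hypothetical blocklist: fingerprints of known prohibited images.
# In practice the hashes come from a curated database such as NCMEC's.
KNOWN_BAD_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",  # placeholder
}

def should_block(image_bytes: bytes) -> bool:
    """Return True if an uploaded image matches a known-bad fingerprint."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES

# At upload time, the platform checks each photo against the list
# and rejects (and reports) any match.
```

The sketch also makes the limitation plain: an upload can be blocked only if its fingerprint is already in the database, which is why newly produced material passes through unrecognized.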
But weeding out terrorism-related communications is significantly harder. For one thing, groups such as the Islamic State continually produce new content, and the system can block only material it already knows to look for. Video, too, is harder to screen than still images. More important, there are legitimate reasons someone might share material that contains an image of, say, an Islamic State attack: It's news. Often there is no clear line between content that serves to recruit, retain and train would-be terrorists and material that is part of a debate about current events that are brutal but cannot be ignored. Finding that line requires more judgment than an algorithm can exercise, and the balance must sometimes tip, uncomfortably, toward free speech.
States such as Russia use the pretext of combating extremism to justify all sorts of censorship. According to a former Facebook staffer interviewed by Reuters, Russian-speaking Facebook users repeatedly flagged pro-Western Ukrainian accounts for hate speech in an apparently coordinated effort to have their pages restricted or removed. On a smaller and less nefarious scale, overzealous law enforcement officers in Western countries would also likely get the balance wrong sometimes.
In this context, Ms. Feinstein's bill is worrying. Important debate will be inhibited if people fear that Facebook will report them for typing the wrong keywords. To avoid legal liability, social media companies might overreport, or, conversely, scale back their monitoring for terrorist content so they can claim not to be government tools that feed names to watch lists. Because these are public forums, the government can largely see what's on them already. In fact, it may be more productive to focus on getting information about possible terrorist activity to the social media companies hosting it more quickly. They can then evaluate whether users are violating their terms of service.
This sort of cooperation seems to be what's emerging. Even before the recent attacks, Facebook hired screeners to weed out terrorist material, and Twitter suspended Islamic State-related accounts. Observers report that Islamic State activity on Twitter appears to be steady but not expanding, since users whose accounts are deactivated must rebuild their audience of followers each time they open a new one. No approach is likely to stamp out terrorists' presence on social media entirely, but tech companies can limit their audience and reach without a heavy hand from government.