The Sri Lankan government’s decision to block all social media sites in the wake of Sunday’s deadly attacks is emblematic of just how much US-based technology companies’ failure to rein in misinformation, extremism and incitement to violence has come to outweigh the claimed benefits of social media.
Sri Lanka’s government moved to block Facebook, WhatsApp and Instagram – all owned by Facebook – on Sunday out of concern that “false news reports … spreading through social media” could lead to violence. The services will be suspended until investigations into the blasts that killed more than 200 people are concluded, the government said. Non-Facebook social media services including YouTube and Viber have also been suspended, but Facebook and WhatsApp are the dominant platforms in the country.
For Facebook in particular, Sri Lanka’s decision represents a remarkable comedown from less than three years ago, when the company was viewed as “one of the world’s most important emergency response institutions”, as Wired magazine put it at the time.
The social network’s vast global scale, its intricate mapping of social relationships, its algorithmically triggered “safety check” product and its suite of tools for rapidly disseminating information and live video were seen as a potential boon to global disaster response. Survivors could use Facebook to mark themselves safe; governments could use it to broadcast live updates; and NGOs and locals could use it to coordinate relief efforts.
But the same features that make Facebook so useful have also made it incredibly dangerous: misinformation travels just as fast as verified information, if not faster.
Recent weeks have provided numerous examples of just how damaging that can be in the aftermath of a crisis. The Christchurch gunman, who shot and killed 50 Muslim worshippers, used Facebook Live to broadcast his attack, effectively weaponizing the platform. Facebook and YouTube both failed to prevent the video from spreading.
Just this week, a YouTube feature designed to curb misinformation ended up creating it, when the platform appended a panel with information about the September 11 terrorist attacks to livestreams of the Notre Dame fire, creating the false impression that the fire was linked to terrorism.
In Sri Lanka, misinformation spread through social media has been linked to deadly mob violence before. In March 2018, the government blocked Facebook, WhatsApp and other internet platforms amid hardline Buddhist violence against Muslims, some of it fuelled by hate speech and false rumors spread on social media. At the time, the Sri Lankan government harshly criticized Facebook for its failure to properly police hate speech in local languages.
The risk of violence versus access to free information
Facebook’s struggle to address misinformation in these instances can be cultural and geographic, said Vagelis Papalexakis, an assistant professor of computer science and engineering at the University of California, Riverside. US-based companies often lack the local resources and native-language experts needed to identify and curb misinformation in real time.
“What needs to be addressed is if I have a very well-polished system that works for English and is optimized for the US, how can I successfully ‘transfer’ it to a case where I’m dealing with a language and a locale for which I don’t have as many examples or human annotators to learn from?” he said.
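The transfer problem Papalexakis describes can be made concrete. Below is a minimal, hypothetical Python sketch: a classifier is trained only on English-labelled examples, then applied to text in another language via a shared multilingual embedding space. The model name, toy data and labels are illustrative assumptions, not a description of any platform’s actual system.

```python
# Minimal sketch of cross-lingual transfer for content moderation.
# Assumption: a multilingual sentence encoder maps text from many
# languages into one shared vector space, so a classifier trained on
# English labels can be applied, imperfectly, to other languages.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Labelled examples exist only in English (1 = likely misinformation).
english_texts = [
    "Officials confirm the road is closed following the incident.",
    "Emergency services have set up a hotline for affected families.",
    "BREAKING: doctors are hiding a secret cure, share before it's deleted!",
    "They don't want you to know who REALLY caused this, forward to everyone!",
]
labels = [0, 0, 1, 1]

clf = LogisticRegression().fit(encoder.encode(english_texts), labels)

# Score text in a language with few labelled examples or annotators.
# (French here for readability; the hard cases are lower-resource languages.)
unseen = ["URGENT: les médecins cachent un remède secret, partagez vite!"]
print(clf.predict_proba(encoder.encode(unseen)))  # [P(benign), P(misinfo)]
```

The shared embedding space is what makes any transfer possible, but accuracy typically degrades for languages underrepresented in the encoder’s training data, which is precisely the gap Papalexakis identifies.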
Still, simply blocking social media is no panacea for misinformation, warned Joan Donovan, director of the technology and social change research project at Harvard Kennedy School’s Shorenstein Center.
“We know based on the past that in crises, everyone goes online to find information,” she said. “When there are large-scale fatalities and multiple emergencies, it’s very important for people to be able to communicate and feel safe … This really puts people who already have vulnerable access to communication in a much worse position. It is a dangerous precedent to set.”
Sweeping social media bans are rare, but governments have used various techniques in the past to block access to social media and other websites. One approach is to ask internet service providers to block the IP addresses associated with a service, as Russia did in its attempt to block the encrypted messaging app Telegram.
Internet users in Sri Lanka have reportedly been able to access some of the blocked sites using virtual private networks (VPNs), tools that route traffic through servers elsewhere, suggesting the block was implemented through URL-specific filtering rather than at the DNS or IP level. Some have suggested this means those attempting to spread disinformation will find ways to do so, while those seeking news online will continue to be exposed to falsehoods.
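How a block is implemented also determines how easily it is circumvented. As a rough illustration, the hypothetical Python sketch below probes the layers mentioned above in turn: DNS resolution, a TCP connection to the resolved address, and a TLS handshake that names the host. The target hostname and the reading of each failure are simplifying assumptions; real censorship-measurement projects such as OONI control for far more confounders.

```python
# Rough sketch: probe where in the stack a block might be applied.
# A failure at each step is only *consistent with* blocking at that
# layer; outages and misconfiguration look identical in this test.
import socket
import ssl

host = "www.facebook.com"  # illustrative target

# 1. DNS layer: does the name resolve at all?
try:
    ip = socket.gethostbyname(host)
    print(f"DNS ok: {host} -> {ip}")
except socket.gaierror:
    raise SystemExit("DNS lookup failed: consistent with DNS-level blocking")

# 2. IP layer: can we open a TCP connection to the resolved address?
try:
    raw = socket.create_connection((ip, 443), timeout=5)
    print("TCP connect ok: the IP itself does not appear blocked")
except OSError:
    raise SystemExit("TCP connect failed: consistent with IP-level blocking")

# 3. Name-based filtering: does a TLS handshake that names the host
#    fail even though the raw connection succeeded? Middleboxes that
#    filter on the SNI field produce exactly this pattern.
try:
    tls = ssl.create_default_context().wrap_socket(raw, server_hostname=host)
    print(f"TLS handshake ok ({tls.version()}): no name-based filtering seen")
except (ssl.SSLError, OSError):
    print("TLS handshake failed: consistent with SNI/URL-level filtering")
```

A VPN tunnels traffic past all three checkpoints before it reaches the filtering network, which is why it defeats each of these techniques.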
“This is a complicated issue, and I can see why the government has done it, but it is sad that the threat of racial violence has to be weighed against access to free information,” said John Ozbay, CEO of the privacy and security tool Cryptee.
Donovan said that part of the challenge for governments was their lack of insight into what is happening on social media platforms in real time.
“Governments are in a tough spot because they have to do this really powerful overreach in order to do anything,” she said. “Because platform companies are not good at content moderation and threat assessment, we are going to keep backing into this scenario any time there is a crisis.”