SOCIAL platforms are scrambling to cope with the onslaught of political messages involving celebrity endorsements, bots and manipulated videos as the US election campaign is shaken up by Democratic candidate Michael Bloomberg’s deep-pocketed efforts.
The surge in questionable political content comes as online giants struggle to curb disinformation and foreign influence campaigns which came to prominence in the 2016 election.
While Twitter has banned candidate ads and Facebook has moved for more transparency, neither appears prepared for new digital efforts – including memes and paid endorsements – that skirt the rules to get campaign messages to as many people as possible.
“The social media platforms don’t have a good handle on how they are going to define political advertising,” said Boston University professor Michelle Amazeen, who specialises in political communication.
Billionaire Bloomberg’s entry into the Democratic presidential race has created new challenges for social networks, as his campaign uses paid celebrity “influencers” and “digital organisers” to post messages on its behalf.
Mr Bloomberg has spent more than US$56 million on Facebook alone, and US President Donald Trump some US$25 million.
“The Bloomberg campaign has taken us into uncharted waters”, testing social networks’ policy on deception and manipulation, said Emerson Brooking, a researcher at the Atlantic Council’s Digital Forensic Research Lab. Mr Brooking explained that paying an army of social media users to post on his behalf borders on deceptive because it “is intended to create the appearance of a digital grassroots that may not exist.”
Twitter and Facebook have said they allow some of these messages as long as they are labelled as “paid partnerships” or “branded content”, while noting that election disclosure rules on such activities remain vague.
Lindsay Gorman, a researcher at the Alliance for Securing Democracy, a security advocacy group, said social platforms are reacting “on the fly” to the rapidly changing strategies. “We are seeing multiple examples of manipulated media and content, and it is difficult for the platforms to respond to these new tools, so they are making policy in real time.”
Most social media restrictions focus on paid advertising but steer clear of “organic” messages from candidates themselves and their supporters.
“Bloomberg exposed a vulnerability in the platforms,” said Republican digital strategist Eric Wilson.
“It’s like squeezing a toothpaste tube,” Mr Wilson added. “Campaigns want to get their message out and if you cut off ads it moves to a different area, like ‘organic’ advertisers.”
Mr Bloomberg drew attention recently for a video from a debate in Nevada that was edited to show his Democratic rivals apparently dumbfounded, with cricket sound effects added. Some critics argued the ad should be banned – and Twitter said it would be labelled as “manipulative” under forthcoming rules, even though it was not a “deepfake” altered by artificial intelligence.
Mr Wilson said the ad used widely accepted campaign techniques and would be permissible on television: “I think if you mash up video clips and add crickets it’s not disinformation.”
Another thorny issue for social platforms is dealing with memes, which can be powerful campaign messages but may also test the limits of misinformation rules.
Candidates like Mr Bloomberg as well as his Democratic rival Bernie Sanders, the current frontrunner, are seeking to learn from Mr Trump’s effective use of memes in the 2016 campaign, said Heather Woods, a Kansas State University professor and co-author of Make America Meme Again: The Rhetoric of the Alt-Right.
“Memes are often satirical or layered with inside jokes, so they’re hard to fact-check,” Prof Woods said. “In 2016, memes were central to disseminating or transmitting political information but they were also important for bringing together groups of people to support an idea.”
Memes can be “important persuasive forms of communication” but may also spread disinformation, according to Prof Woods, creating a conundrum for social networks. Analysts note that memes and other forms of satire were also used by Russian groups seeking to sow division.
Ms Gorman said platforms “haven’t really thought about this”, but that they should focus on intent rather than format. “I would draw the line at deceptive manipulation,” she added.
Although social networks have had some success in removing automated accounts or “bots”, many still operate in the political arena.
The online tracker Bot Sentinel found tens of thousands of bots active on Twitter, many amplifying messages on behalf of Mr Trump, with Mr Sanders also high on the list.
“It seems to be a vector for people interfering in our elections,” said Mr Wilson. “There is very little cost and it has an impact.”
Ms Gorman said social media platforms have made progress in rooting out foreign actors using bots, but questioned whether the same manipulation techniques used in 2016 would resurface.
“The shift to private groups and encrypted communications will influence the prospects for disinformation,” she pointed out, noting that WhatsApp has been used to circulate hoaxes, notably in India. “This trend could make it harder to police disinformation.” AFP