With another US Presidential election on the way in 2020, we can expect the debate around fake news to once again ramp up, and become a key focus of discussion as we look at how political influence spreads online.
But what if fake news isn’t actually the problem?
Sure, it would be easier to be able to attribute the broader shifts in the political landscape to lies and deceit online – that would help explain the more polarizing movements which seem to be gaining momentum, often despite significant evidence against many of their key claims. But various investigations – including my own rudimentary analysis – have actually found that it’s not fake news that’s fueling these movements, but inherent bias, propped up by the capacity to find others online who agree, and the validation that individuals receive as a result.
I came across this when I went looking for evidence to support increased action against fake news online – my initial view was that, with the election looming, it would make sense to increase the pressure on Facebook, specifically, to remove more false news reports, in order to reduce their impact on general debate.
What I found, however, is that it’s rarely so black and white. While some clearly false claims do circulate through extremist political groups – like this one about Alexandria Ocasio-Cortez, which was picked up and debunked by Facebook’s fact-checkers – most of the stories shared in such groups are not so clear-cut, and wouldn’t actually be removed under any fake news policy.
Most of the content being distributed is more like this:
This story is a re-iteration of a long-standing ‘debate’ around what’s an acceptable way to celebrate the holidays – which isn’t really a debate at all. Presidents since the 1950s have chosen, at different times, to use ‘Happy Holidays’ in their messaging, so as not to alienate non-Christian recipients of holiday mail. This wasn’t widely considered a problem until more recently, with President Trump, in particular, making it a larger point of focus, which his supporters now utilize as a key tenet of their nationalist approach.
Posts like this are particularly effective at fueling support on Facebook because they tap into a passionate issue, one which inspires people to tap ‘Like’ and to comment in support of such a stance. That engagement triggers Facebook’s algorithm to distribute the post further, in order to spur more of the same, and the story gains momentum, becoming much bigger through that additional reach.
But it’s not ‘fake news’, it’s more an exaggeration of a specific element. And because it triggers such an emotional response, it spreads, solidifying support within certain elements of the political spectrum.
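Facebook has never published its ranking formula, so the following is a purely illustrative sketch – every function name, weight, and reaction rate here is my own assumption, not anything the platform has disclosed. But the feedback loop described above can be modeled in a few lines of Python:

```python
# Hypothetical toy model of an engagement-driven distribution loop.
# None of these weights or names come from Facebook itself.

def engagement_score(likes: int, comments: int, shares: int) -> float:
    """Assumed scoring: reactions that take more effort count for more."""
    return likes * 1.0 + comments * 2.0 + shares * 3.0

def next_audience(audience: int, score: float, rate: float = 0.001) -> int:
    """The more a post is engaged with, the more feeds it gets pushed into."""
    return int(audience * (1 + rate * score))

# A divisive post that reliably provokes reactions compounds its own reach.
audience, likes, comments, shares = 1_000, 0, 0, 0
for round_ in range(5):
    # Assume a fixed fraction of each new audience reacts (the issue is 'passionate').
    likes += int(audience * 0.05)
    comments += int(audience * 0.02)
    shares += int(audience * 0.01)
    audience = next_audience(audience, engagement_score(likes, comments, shares))
    print(f"round {round_ + 1}: audience ≈ {audience:,}")
```

Run for a few rounds, the toy model’s audience compounds rather than grows linearly – which is the dynamic that lets a single divisive post gain momentum far beyond its original page.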
Here’s another example:
Again, the post headline is more misleading than false – the report it actually refers to examines how our food metaphors will likely evolve over time, reflecting broader societal shifts. It doesn’t suggest that vegans are calling for such a change, but rather that it will occur naturally over time.
But that clarification is largely irrelevant – as you can see here, this post has spurred hundreds of comments and shares, because it aligns with a particular pain point, and again, it inspires an impassioned response.
Various research reports have shown that triggering high-arousal emotions, like joy or fear, is key to viral distribution online.
Indeed, according to research by Sorbonne University in 2016:
“Articles with a large number of comments were found to evoke high-arousal emotions, such as anger and happiness, paired with low-dominance emotions where people felt less in control, such as fear. The New York Times articles that received the most comments in 2015 all featured emotionally charged, and often divisive, topics: Amazon’s stringent workplace policies, Kim Davis, a police officer charged with murder, the San Bernardino shootings, the Benghazi panel.”
Over time, news outlets have learned that divisiveness can be good for business, which is why we’ve seen increasing polarization among news providers, along with fringe online publications that have risen up by taking an even more selective, one-sided perspective on certain issues. But as you can see in these examples, the reports aren’t necessarily false – they’re not ‘fake news’ as such. They’re just skewing the information a certain way, in order to play into these dynamics.
Another one:
For climate change deniers, this is a reiteration of their belief – “if the world is getting hotter, how come these boats are getting stuck in ice so thick they can’t get through it?”
The truth of the story is that explorers looking to research the impacts of climate change have ended up sailing further into such conditions than they previously could, because the thinning ice – itself an effect of climate change – no longer lets them anchor closer to the edge. Moving further in, some have been caught out in heavier conditions. If anything, the story actually underlines the impacts of climate change, rather than debunking them – but as you can see, the truth is relative, and again, if a story sparks an emotional response, it’ll do well, regardless of the actual facts.
But still, it’s not necessarily ‘fake news’. Removing false reports wouldn’t eliminate this.
Then there are more questionable posts like this:
That’s offensive, even verging on hate speech, but it likely doesn’t cross the actual line. The insinuation, however, is clear, and it’ll contribute to existing division, galvanizing people of certain political leanings. These are the same tactics that Russian-based operatives used to infiltrate US political debate ahead of the 2016 election, and such content will again play a key role in 2020. But it’s not ‘fake news’ that’s the problem – it’s over-simplification, selective reporting, and playing into existing bias. And that is increasingly difficult to stamp out.
Such findings actually align with Facebook CEO Mark Zuckerberg’s initial response to the suggestion that fake news on Facebook influenced the 2016 vote.
As per Zuckerberg (in November 2016):
“Personally I think the idea that fake news on Facebook, which is a very small amount of the content, influenced the election in any way – I think is a pretty crazy idea.”
Zuckerberg was ridiculed for his comments, and later expressed regret for his wording. But actually, he was probably right – clearly false news is likely a much smaller contributor to such movements, while skewed reporting, aimed at specific pain points, is more damaging.
Indeed, further academic study has found that:
“Fake news consumption is concentrated among a narrow subset of Americans with the most conservative news diets. And, most notably, no credible evidence exists that exposure to fake news changed the outcome of the 2016 election.”
And that:
“Web-browsing data collected during the 2016 U.S. election suggests that the average American was directly exposed to only a couple of pieces of blatantly false information on social media during the campaign, and that such exposure to misinformation on social media tends to have minimal effects on political beliefs.”
At first blush, this seems like a flaw in data collection, not indicative of the real impacts. But on further analysis, this is likely correct.
So what is the biggest influencer of political movements online?
As per research conducted by the University of Michigan and the University of Vienna, it’s more likely your connections that are driving your political views:
“Most people do not directly follow political pundits or news organizations on social media, yet the majority of social media users are incidentally exposed to news and political information on the platforms. This suggests that exposure to political information – including inaccurate political information – is in large part a result of our social connections.”
It’s sharing through these smaller micro-networks of people who support each other’s beliefs that enables such narratives to spread, furthering impactful divides.
“If false information is being shared by close friends or family members, people may be less critical of its original source and more inclined to trust the information, regardless of its veracity. Because their defenses are down, individuals may be more prone to believe the misinformation, and even subsequently share it with their social networks.”
These reinforcement loops solidify such perspectives, and the dopamine rush that people get as a result of social likes and responses prompts further sharing. In this sense, it’s less about the accuracy of the report itself and more about what it can do for you.
Does it support your existing belief? Will your connections Like and comment in response?
Again, as Zuckerberg noted back in 2016:
“Voters make decisions based on their lived experience.”
That, ideally, would mean their day-to-day life – how politicians and political decisions affect the way they live. But increasingly, the experience we’re talking about on such issues is less about broader societal impact, and more about the personal validation people can get from sharing a meme.
But most political issues can’t be simplified into an image with a few words. So what then? If people can’t get that dopamine hit, does that make them less engaged in the actual details of key matters? Should political groups simply be looking for more divisive, argumentative angles, and simplifying their policies in line with modern communications trends?
This is the ‘hate machine’ theory of social media, where anger and division reign supreme, and personal validation matters more than facts. And in that scenario, logic – as we’ve seen with vaccinations, climate change, even the ‘flat earth’ movement – matters far less than engagement.
Definitely, we should be doing what we can to detect and remove false narratives, but on reflection, it may not actually make any major difference.