Today’s social media platforms are premised upon the concept of “post first, moderate later.” Their billions of users flood the platforms with material in real time, with no pre-review to determine whether the content actually meets a site’s acceptable speech guidelines beyond a few small hash-based blacklists. Moderation comes only in the form of post-review, in which content is flagged after it has been posted and only then subjected to review and potential removal. This runs contrary to nearly all other forms of traditional publication, in which content is reviewed for acceptability prior to release. Could restoring such pre-review to the online world eliminate hate and horror and curb the digital falsehoods and foreign influence that run rampant today?
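To see how narrow that existing screening is, consider what a hash-based blacklist can and cannot do: it blocks only files matching a list of previously identified material (production systems such as Microsoft’s PhotoDNA use perceptual hashes that tolerate small edits), and everything else publishes untouched. A minimal sketch in Python, with an assumed placeholder blocklist:

```python
import hashlib

# Illustrative only: a tiny hash-based pre-screen. The digest below is an
# assumed placeholder for a previously banned file, not a real entry.
BLOCKED_HASHES = {
    "5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8",
}

def passes_hash_check(upload: bytes) -> bool:
    """Return True if the upload matches nothing on the known-bad list."""
    return hashlib.sha256(upload).hexdigest() not in BLOCKED_HASHES

print(passes_hash_check(b"an ordinary post"))  # True: goes live instantly
```

Anything not already on the list, however novel or horrific, sails straight through.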
The modern social media world could not exist without the Web’s popularization of post-moderation content review. Unlike the traditional media that preceded it, the Web introduced the idea of gatekeeper- and editor-free publishing, in which anyone anywhere could write anything and publish it instantly to the world, without any form of review or moderation.
Whereas print and broadcast outlets operated under strict legal regimes that made them liable for many kinds of content, and thus carefully scrutinized every statement prior to publication or airing, the Web created a largely unreviewed free speech zone where nothing was off the table and the few rules that existed were applied only after the fact.
Those with evil intent recognized that this post-moderation model meant they could post material that violated a platform’s terms of service and spread it virally, reaching potentially millions of people before the platform got around to reviewing and removing it.
In a pre-review world, each post is carefully reviewed for acceptability and compliance with a platform’s terms of service prior to publication. Every piece of hate and horror speech, graphic violence, terrorism material, non-consensually shared intimate imagery, abuse and every other violation is reviewed and stopped before it can ever see the light of day.
This mirrors the process used in journalism and the academic world, in which content is carefully reviewed and edited prior to publication to ensure it complies with ethical standards and best practices, is fully sourced, and is well written.
The pre-review model would largely halt the spread of hate and horror online, but, given the human-centric review processes social platforms use today, it would impose an almost insurmountable cost.
It is small wonder, then, that social platforms were built upon the idea of post-moderation, in which a small team of moderators examines a small fraction of the daily firehose crossing their platform to verify compliance and remove offending posts.
Social platforms could decide that combating hate and horror is worth the cost and impose human review of every single post before it goes live, but doing so would sacrifice their real-time advantage and require massive new investments in content moderation.
After all, as a Facebook spokesperson conceded earlier this month, the company has just 15,000 community operations personnel to police a platform of more than two billion users. That works out to roughly one moderator for every 133,000 users, before even counting the number of posts each of those users produces every day.
Asked why Instagram does not pre-screen each uploaded image and require certain kinds of imagery like graphic violence to undergo human review prior to publication, the company declined to comment. Such algorithmic pre-review coupled with human confirmation would have prevented the Bianca Devins murder images from being shared in the first place.
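Such a pipeline need not be exotic. As a rough sketch (the classifier, threshold and review queue below are hypothetical placeholders, not Instagram’s actual systems), an algorithm scores each upload, low-risk images publish instantly and anything the model flags waits for a human moderator:

```python
import queue

# A minimal sketch, not any platform's actual API: a classifier scores each
# upload, low-risk items publish instantly, and flagged items are held for a
# human moderator. The model, threshold and publish step are placeholders.

REVIEW_THRESHOLD = 0.8                      # assumed cutoff for human review
human_review_queue: "queue.Queue[bytes]" = queue.Queue()

def violence_score(image: bytes) -> float:
    """Stand-in for a trained image classifier returning a 0..1 risk score."""
    return 0.0  # placeholder: a real system would run a model here

def publish(image: bytes) -> None:
    """Stand-in for making the post publicly visible."""

def handle_upload(image: bytes) -> str:
    if violence_score(image) < REVIEW_THRESHOLD:
        publish(image)                      # low risk: goes live immediately
        return "published"
    human_review_queue.put(image)           # high risk: held for a human
    return "held_for_review"

print(handle_upload(b"example image bytes"))  # "published" with the stub score
```

The hard design choice is the threshold: set it low and nearly everything waits on scarce human reviewers; set it high and the worst material slips through automatically.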
Facebook is actively experimenting with the idea of algorithmic pre-review, debuting earlier this year preliminary results from its efforts to build pre-review directly into its end-to-end encrypted communications clients, where it would do precisely this: examine each piece of content and prevent violating material from ever being shared, even privately.
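In broad strokes, and purely as an assumed sketch rather than Facebook’s published design, client-side pre-review moves that check onto the sender’s device, before encryption ever occurs:

```python
import hashlib

# Hypothetical on-device blocklist shipped to the client; the entry is a
# placeholder digest, and encrypt/transmit stand in for the messaging stack.
ON_DEVICE_BLOCKLIST = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def send_attachment(data: bytes, encrypt, transmit) -> bool:
    """Scan locally first; violating content is never encrypted or sent."""
    if hashlib.sha256(data).hexdigest() in ON_DEVICE_BLOCKLIST:
        return False                   # blocked on the device itself
    transmit(encrypt(data))            # otherwise encrypted end-to-end as usual
    return True
```

Because the check happens before encryption, the platform never needs to decrypt anything on its servers; the content simply never leaves the sender’s device.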
Putting this all together, we accept the idea of post-moderated social media today as if that were the only way social media could possibly exist. Yet this was a conscious design decision by social platforms to place profits over safety. Such platforms could just as well have been created in the image of the journalistic and academic communities, with each post subjected to careful review prior to publication. Indeed, Facebook is rushing to do precisely this to get around the protections of end-to-end encryption.
In the end, it is worth contemplating: if social media platforms had been built upon pre-review rather than post-moderation, could all of today’s social media ills have been prevented?