Internet Nerd Compendium

internet // philosophy // politics // sociology

How to moderate hate

Shadow banning, labelling and removal – how and why social media platforms moderate and remove their users' content.

In his work Governance of and by Platforms, Tarleton Gillespie examines the much-discussed Section 230 and how social media platforms moderate the content of their users. Section 230 is a piece of American legislation designed to offer a safe harbour for social media platforms: it limits the degree to which platforms, and other content-hosting services such as YouTube, are liable for the content their users post. In recent months there has been much discussion about whether the legislation should be revoked – mostly by pro-Trump, right-wing pundits.

But how did we get here? The culture of social media moderation, Gillespie suggests, arose from the early internet message boards, where rules were tinkered with and adapted by the community as it grew. As Gillespie points out, policies on the giant social media platforms are still written in an ad hoc way, in response to surges in user behaviour – something users sometimes perceive as the rules being unevenly applied. Gillespie notes that having active human moderators, as seen on Reddit boards, would be too unwieldy given the sheer scale of the open social media platforms. Instead he describes the multi-layered approach to governance that platforms such as LinkedIn, Twitter and Facebook have relied on – in particular, the onus placed on users to report or flag egregious content. Simply put, the platforms rely on the free labour of their users to highlight dangerous and harmful content.

In her work Commercial Content Moderation: Digital Laborers’ Dirty Work, Sarah Roberts examines the role of commercial content moderation (CCM) workers, who must decide on the consequences for reported content and for the user who posted it. Facebook was recently sued by its content moderators for failing to adequately protect them from the disturbing content they were being asked to review, and it eventually agreed to a $52 million settlement with moderators who had developed PTSD from the material they were reviewing day in, day out.

This is the type of content we are all swimming in, however – especially marginalised groups, who are brigaded and dog-piled by hate groups. What are the consequences for the mental health of the users who spend hours reporting that content in the first place? Amnesty International looked at the harassment and abuse faced by women on social media, and the results are startling. Yet there remains, on Twitter especially, a cultural taboo around blocking or muting users. I’ve personally been policed, mostly by white cis men, for my choice to block those who make me feel unsafe. I block for my own safety – from neo-nazis, transphobes and misogynists who would target me, both online and in real life, for being outspoken about my beliefs on Twitter. I also block accounts that stalk me, insult me or send me pornographic images. But to some cis male trolls, blocking is reprehensible and reporting is out of line. Should being online really be this hard? Why is it that a neo-nazi knows they can create a burner account on Twitter and post violent racist imagery for a week or two before enough users report them and they are forced off the site and onto another burner account?

Both Gillespie and Roberts point out that platforms are attempting to strike a balance – to clamp down just enough on the content that would drive away users or put off advertisers, but not so much that moderation itself cuts into profits. Profits are placed well above public safety or social justice. Following the 2020 US election, Trump and his followers began to push for the removal of Section 230, claiming that by removing right-wing accounts that advocated against democracy the companies were acting as publishers rather than platforms. We can see from Gillespie's and Roberts' work that this approach was entirely in keeping with the platforms' past behaviour: after the attempted coup on January 6, anti-democratic users fomenting violence were alarming advertisers and threatening the platforms' profits. The irony is that this legislation, as Gillespie points out, is what provides social media platforms with some protection from liability for content posted on their sites. If they were made liable for the planned violence and hate speech prevalent on platforms such as Facebook and Twitter, they would likely police users and their content much more strictly than they do now, when users carry most of the liability for what they post.

While opinions remain sharply polarised on how to deal with prejudice, violence and harassment on social media – with some proposing that all content should be allowed and others lobbying social media companies to act more decisively against those who use their platforms for hate – almost everyone can agree that the current system is simply not working. The ad hoc approach described by Gillespie may not be enough for the platforms to avoid external regulation.

Got a minute? Sign this petition to try and get Twitter to take transphobia seriously on its platform. Block as often as you like and report hate where you see it. Oh, and consider deleting Facebook – I did four months ago, and I haven’t looked back.
