For ten long years, we fought the good fight on content moderation, and by the start of 2022, it seemed like things were on a pretty reasonable track. Platform moderation was far from perfect, but most of the major players agreed they should be doing it, and they assigned resources accordingly. The platforms and their leaders were often reluctant, often whiny, but they did what they had to do to keep critics and regulators at bay. Trust and Safety departments were staffed with mostly good people who believed in the task that had fallen to them, even if their CEOs did not.
One year later, it’s not such a pretty picture. Led by Elon Musk’s Twitter, the major platforms have laid off tens of thousands of people, including vast swathes of the Trust and Safety apparatus. Meanwhile, new platforms like Post, Substack Notes and Bluesky are launching with a wish and a prayer that their enlightened free speech ideologies will empower good engagement and discourage bad engagement.
In short, despite vast amounts of experience and data showing the necessity and effectiveness of robust content moderation, no one has learned anything.
These problems are very visible on some of the new platforms, including Substack Notes, which is linked to the engine that (for now) powers this newsletter. On Mastodon, you might do better, depending on your server, since it was designed with abuse mitigation in mind, but the federated model means that there aren’t consistent rules across the entire platform, and there’s no single authority to deal with emerging problems.
So we’re back to 2013, or thereabouts, and rebooting the five stages of content moderation. If you’ve been following me for a while, you will have seen these before, but for newcomers, the stages are:
Denial: Because our platform is inherently good, we don’t have any need for robust content moderation.
Free-speech mouth noises: OK, so despite being inherently good, we can see some problems, but our unwavering commitment to free speech precludes us from taking any robust action to solve those problems.
Bargaining: OK, so despite our unwavering commitment to free speech, we acknowledge that we still have a problem, a big one, and we need to pacify users/activists/regulators/shareholders. So while we’re still not engaging in any meaningful moderation, we agree something more needs to be done, and we’re considering many options, so please leave us alone in the meantime.
Inadequate token effort: We’ve taken down all the beheading videos, and from now on, we’ll take beheading videos almost as seriously as we take DMCA violations. Everything else remains the same. You’re welcome.
Acceptance: ALL RIGHT ALREADY. We acknowledge we have a problem, that we are part of society, and society has rules, and we’re going to make a semi-credible effort to moderate content in such a way that most of these bad headlines finally go away—which everyone knows is the most important thing.
If this sounds like an unpleasant process for everyone involved, especially users, you’re right. But remember, this is the same American capitalist system that can’t even comprehensively ban asbestos. Except that in Silicon Valley, the problem is worsened by the 2020s iteration of the masters-of-the-universe attitude. Tech founders and assorted pretenders are so convinced of their brilliance that they don’t think they need to learn from anything that went before. And they sure don’t need to listen.
They’ll get schooled, eventually, most of them. While there are many nuances and pitfalls to consider in the content moderation game, everyone is playing it to some extent—against child pornography, against other pornography, and in service to the almighty corporate intellectual property leviathan. For sites seeking support from VCs, or planning to sell shares in a public offering, or hoping to be available through the Apple and Google app stores, those rules are the floor, not the ceiling.
A little bit of historical perspective could save these companies, their users, and society in general, a lot of time and trouble. But no, we’re still in the era of “move fast and break things”—specifically things like user safety and democracy.
Unfortunately, those are load-bearing walls.
I so agree. And I loved the close.
I started working as a social media consultant (well, blog consultant, because social media wasn't a thing then) in 2004. I spent a lot of time over the following years trying to persuade key players in the social media world to up their content moderation game, to no avail whatsoever. My exhortations were ignored by venerable institutions such as the BBC and Google – I gave talks about the human side of social media and the need to nip community problems in the bud, but I might as well have just gone out into the garden and howled for an hour, for all the good it did.
There has never been much willingness to tackle these issues, even though the major players were eventually forced to. Like me, the community experts I knew back then largely left the industry by 2014, and now we just gnash our teeth in frustration every time this same problem comes up again.
I have a note to myself to eventually write a post titled 'Substack doesn't have a Nazi problem, it has a libertarian techbro problem'. Because that is, ultimately, the root of the issue. Many social media tools were built by people who wanted to fill a void in their lives, and that void was a lack of social skills. But that sadly meant that they also lacked the skills to create a platform that could actually function in reality, because humans are messy and difficult and tricksy, and the platforms' big ideas about 'free speech' will always, always hit the brick wall of Nazis, CSAM, misogynistic death/rape threats and other forms of abuse.
I just hope that enough people here continue to put pressure on Substack to push them towards Step 5, because otherwise this place really is going to turn into a Nazi bar.