For ten long years, we fought the good fight on content moderation, and by the start of 2022, it seemed like things were on a pretty reasonable track. Platform moderation was far from perfect, but most of the major players agreed they should be doing it, and they assigned resources accordingly. The platforms and their leaders were often reluctant, often whiny, but they did what they had to do to keep critics and regulators at bay. Trust and Safety departments were staffed with mostly good people who believed in the task that had fallen to them, even if their CEOs did not.
One year later, it’s not such a pretty picture. Led by Elon Musk’s Twitter, the major platforms have laid off tens of thousands of people, including vast swathes of the Trust and Safety apparatus. Meanwhile, new platforms like Post, Substack Notes and Bluesky are launching with a wish and a prayer that their enlightened free speech ideologies will empower good engagement and discourage bad engagement.
In short, despite vast amounts of experience and data showing the necessity and effectiveness of robust content moderation, no one has learned anything.
These problems are very visible on some of the new platforms, including Substack Notes, which is linked to the engine that (for now) powers this newsletter. On Mastodon, you might do better, depending on your server, since it was created with abuse in mind, but the federated model means that there aren’t consistent rules across the entire platform, and there’s no singular authority to deal with emerging problems.
So we’re back to 2013, or thereabouts, and rebooting the five stages of content moderation. If you’ve been following me for a while, you will have seen these before, but for newcomers, the stages are:
Denial: Because our platform is inherently good, we don’t have any need for robust content moderation.
Free-speech mouth noises: OK, so despite being inherently good, we can see some problems, but our unwavering commitment to free speech precludes us from taking any robust action to solve those problems.
Bargaining: OK, so despite our unwavering commitment to free speech, we acknowledge that we still have a problem, a big one, and we need to pacify users/activists/regulators/shareholders. So while we’re still not engaging in any meaningful moderation, we agree something more needs to be done, and we’re considering many options, so please leave us alone in the meantime.
Inadequate token effort: We’ve taken down all the beheading videos, and from now on, we’ll take beheading videos almost as seriously as we take DMCA violations. Everything else remains the same. You’re welcome.
Acceptance: ALL RIGHT ALREADY. We acknowledge we have a problem, that we are part of society, and society has rules, and we’re going to make a semi-credible effort to moderate content in such a way that most of these bad headlines finally go away—which everyone knows is the most important thing.
If this sounds like an unpleasant process for everyone involved, especially users, you’re right. But remember, this is the same American capitalist system that can’t even comprehensively ban asbestos. Except that in Silicon Valley, the problem is worsened by the 2020s iteration of the masters-of-the-universe attitude. Tech founders and assorted pretenders are so convinced of their brilliance that they don’t think they need to learn from anything that went before. And they sure don’t need to listen.
They’ll get schooled, eventually, most of them. While there are many nuances and pitfalls to consider in the content moderation game, everyone is playing it to some extent—for child pornography, other pornography and in service to the almighty corporate intellectual property leviathan. For sites seeking support from VCs, or planning to sell shares in a public offering, or hoping to be available through the Apple and Google app stores, those rules are the floor, not the ceiling.
A little bit of historical perspective could save these companies, their users, and society in general, a lot of time and trouble. But no, we’re still in the era of “move fast and break things”—specifically things like user safety and democracy.
Unfortunately, those are load-bearing walls.
I so agree. And I loved the close.
My tiny sliver of research tells me that you’re an expert on extremism, and I have a lot of respect for that expertise. We may not agree on moderation specifics, but I wanted to make it clear upfront that I know you know what you’re talking about.
That said, where do you put this moderation approach in your list?
https://substack.com/profile/2-chris-best/note/c-15396859