I missed last week’s newsletter because I was going to and fro in the earth, and walking up and down in it, but I’m back, and I’m on Bluesky.
Reader, I recently got a chance to check out the hot new social app proving once again that value correlates to scarcity. It’s as fun and interesting as people say it is (way more fun than, ahem, Notes), but that assessment comes with a mountain of caveats.
First and foremost, it’s small, admitting more users on an invitation-only basis (thus the scarcity). While a few trolls have crept in, most users have good intentions. Not unrelatedly, most users hail from fairly adjacent Twitter networks, meaning a lot of us know each other already. Also not unrelatedly, the platform is far more fun than function. For instance, I definitely wouldn’t use it to search for information during a breaking news crisis.
Many beta users have gushed about the positive shitposting vibe, which is real, but that's not likely to last beyond the public launch. Which brings us to the second issue: Moderation, or lack thereof. Since Bluesky is based on a federated model, there won't be any universal moderation, although the developers offer more granular content filtering than most of Bluesky's peers do. You can filter content by category (sexual, violent, hate, spam, impersonation), with options to hide, warn or show.
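If it helps to picture the mechanics, here's a minimal sketch of that hide/warn/show scheme. This is my own illustration in TypeScript, not Bluesky's actual code or API; the category names, defaults and function are hypothetical.

```typescript
// Illustrative sketch only (not Bluesky's real API): per-user preferences
// mapping each label category to an action, plus a resolver for labeled posts.
type FilterCategory = "sexual" | "violent" | "hate" | "spam" | "impersonation";
type FilterAction = "hide" | "warn" | "show";

type ModerationPrefs = Record<FilterCategory, FilterAction>;

// Hypothetical defaults; a client would consult these before rendering a post.
const defaultPrefs: ModerationPrefs = {
  sexual: "warn",
  violent: "warn",
  hate: "hide",
  spam: "hide",
  impersonation: "warn",
};

// Given the labels attached to a post and the user's preferences,
// pick the strictest applicable action: hide beats warn beats show.
function resolveAction(labels: FilterCategory[], prefs: ModerationPrefs): FilterAction {
  const rank: Record<FilterAction, number> = { show: 0, warn: 1, hide: 2 };
  return labels.reduce<FilterAction>(
    (worst, label) => (rank[prefs[label]] > rank[worst] ? prefs[label] : worst),
    "show"
  );
}
```

The point of the sketch is just that this kind of filtering is a client-side preference, not removal; a hidden post still exists on the network, which is why granular filtering isn't the same thing as universal moderation.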
In theory, the expanded controls might help mitigate Bluesky's otherwise lax content philosophy, currently stuck on free-speech mouth noises as a formal position. Ultimately, the current Golden Age of Skeeting is based on the beta test's development of a seriously gated community. It's not exactly gated along ideological lines (George Santos became the first indicted skeeter this week), but on friend-of-a-friend lines. Even with this guardrail, Nazis and assorted other bad actors have already made runs on the network. Despite the free-speech mouth noises, some of these have been deplatformed. When Bluesky opens to the public, well, I guess we'll see how it goes.
This brings me full circle to my last newsletter on the Five Stages of Content Moderation, which provoked some discussion here and on Mastodon. One reasonable question raised by several people was what I mean by saying content moderation “works.” The definition of success is a big deal in this context.
Some people, not unreasonably, think that content moderation is meant to eradicate hate and extremism. It’s not. The primary function of content moderation is to make one’s platform safe and enjoyable for users.
As someone who closely tracked the presence of ISIS on Twitter from its earliest days to its near-demise, I can tell you that content moderation is overwhelmingly effective at removing the content being moderated. There's no such thing as a 100 percent solution, but if a platform decides to ban certain kinds of content, it has the power to decimate that content's reach and shorten the lifespan of posts that contain it.
At the height of moderation, ISIS propaganda on Twitter was typically removed within 24 hours of posting, and often within two hours. Traditionally, or at least until Musk's Twitter, moderation has also kept major platforms largely (but not 100 percent) free of DMCA copyright violations and child pornography. All of these efforts require constant vigilance, but vigilance pays off. Moderation works.
Free speech advocates don’t harp on these issues, perhaps because they’re not as obviously political to everyone, but speechers get more rigorous when we start talking about Nazis. Some of these objections are sincere; others are not. In both cases, objectors usually show a much higher tolerance for White extremists than for non-White extremists. Objections to moderating ISIS were largely vibe-based, resting on incorrect assertions or assumptions that content moderation “doesn’t work” and is simply a game of “whack-a-mole.”
Almost no one argued, in the West or in the Muslim world, that ISIS should enjoy an unmolested place in the public square because “free speech.” But plenty of people in the White Western world are happy to argue we should extend that courtesy to Nazis.
If your goal is to contain or eradicate a type of behavior (child exploitation, ideological extremism, or fascism), that’s undeniably complicated, and simply deplatforming a movement won’t usually eradicate it.
Luckily, a more realistic goal is available: Content moderation is very successful at limiting how bad actors weaponize your platform.
Companies that own platforms also own significant moral hazard for how people exploit their unique environments, particularly the recommendation engines and algorithms that clearly distinguish social platforms from, say, telephone companies. Bluesky will have to deal with this eventually, as every other online community has. Perhaps more so, since the AT Protocol behind the start-up will let people build custom algorithms, which seems like a disaster-in-waiting to me.
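To make that worry concrete, here's a toy example of the kind of third-party ranking logic a custom feed could apply. This is my own hypothetical, not the actual AT Protocol feed-generator interface; the types and weights are invented for illustration.

```typescript
// Hypothetical custom-feed scorer (not the real AT Protocol interface).
// A naive engagement-first algorithm like this rewards whatever draws the
// most reaction, with no notion of whether that reaction is good-faith.
interface Post {
  uri: string;
  likes: number;
  reposts: number;
  replies: number;
  ageHours: number;
}

function rankFeed(posts: Post[]): Post[] {
  // Weight replies and reposts above likes, and decay older posts.
  const score = (p: Post) =>
    (p.likes + 2 * p.reposts + 3 * p.replies) / Math.pow(p.ageHours + 2, 1.5);
  return [...posts].sort((a, b) => score(b) - score(a));
}
```

Nothing in a scorer like this can tell organic enthusiasm from a brigading campaign or an outrage pile-on, which is the disaster-in-waiting part: once anyone can ship an algorithm, someone will ship that one.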
There are downsides to moderation, and I don’t want to undersell them. One downside is that people who have been deplatformed may be further radicalized by the deplatforming itself. I mentioned this possibility in my early research, and some of my colleagues have examined the question even more closely. In the case of ISIS, it was pretty easy to argue that the upside of deplatforming outweighed the downside. ISIS was a numerically small, geographically dispersed movement, whose adherents were unusually dependent on social media as a vector for propaganda, fundraising and recruitment. Knocking ISIS off of the biggest platforms was therefore disproportionately effective in limiting the spread of its ideology.
The question of deplatforming right-wing extremists is much stickier in the United States, because it’s a numbers game. For all that right-wing provocateurs have attempted to leverage ISIS to smear the whole Muslim world, the truth is that ISIS never enjoyed widespread support from Muslims. Aside from some small geographical clusters, no large community or legitimate government supported ISIS.
In contrast, a very large minority of Americans overtly supports various kinds of extremism, and an even larger minority supports Donald Trump, who has steadily amplified (or at least resonated with) his constituency’s most extreme elements. The borders may be fuzzy, but by any measure, right-wing extremism within the United States is exponentially bigger than ISIS within the United States—we’re talking at least tens of millions compared to hundreds, and honestly, the numbers don’t get much better in a global frame.
The adjacency of American right-wing extremism to the American political mainstream is also very uncomfortable for companies, and they have not typically covered themselves in glory when addressing toxic movements with political clout.
While the moral and legal hazard of platforming extremism remains unchanged (and remains the primary reason to deplatform), the social impact of deplatforming a large and cohesive movement is an open question. When Trump was deplatformed after January 6, he respawned on his own platform, Truth Social, which currently has around half a million daily users—a very small number compared to other major social media platforms, but a number that ISIS could only have dreamed of.
Truth Social is also part of a wider right-wing ecosystem. Deplatforming ISIS was an effective way of damaging the movement because deplatformed users had very limited options to reconstitute their networks. Even networks that managed to successfully respawn were forced to operate under constant threat.
Deplatforming QAnon, for instance, was much less effective at damaging the movement, because QAnon users could reconstitute their networks on a number of alternative platforms as well as through in-person meetings. If we suspect that deplatforming has a radicalizing side-effect for some users, the newly deplatformed QAnon adherents might be able to collectivize that radicalization more effectively.
We haven’t seen a lot of unequivocal evidence for that yet. The radicalizing effects of deplatforming are still an open question, research-wise. We don’t have a good window into how deplatforming works over time, or what effect it has on populations large or small. We also don’t have a good window on the counterfactual—how the potential for increased radicalization is balanced against the reduction in opportunities for recruitment and proselytization.
But the moral hazard remains clear, and at the end of the day, it seems to me that this should be enough incentive. A major social media platform becomes major by offering unique tools to reach audiences and keep them engaged. In the wrong hands, those tools are weapons, and a company bears some responsibility for how those unique tools are used and how the company overtly and covertly empowers or encourages their use. Content moderation is overwhelmingly successful at moderating content, and companies should do it in order to avoid both moral and legal hazards.
The rest is up to society at large—an ongoing theme of this newsletter, and where I will leave the matter for this week.
New collected volume
The RESOLVE network recently published Researching Violent Extremism: Considerations, Reflections, and Perspectives. I contributed a chapter on the current state of extremism research. I wrote it in 2019, so it’s a little dated, but many of the issues I discussed are still highly pertinent. With contributions from a lot of my smartest colleagues in the VOX-Pol network and beyond, it’s well worth your time.
Read Optimal
I haven’t been flogging my debut dystopian novel a lot recently, but man, it does seem more and more relevant every day, especially with Bluesky’s aforementioned intention to create a competitive environment for algorithms. Check it out here, if you want a different path into some of these discussions. Some reviews can be found here.