We interrupt your regularly scheduled newsletter for an update.
You can argue for different tipping points, but from my perspective, the watershed moment for content moderation on Twitter was the 2013 Westgate Mall attack. Al Shabab was live-tweeting the event. A previous al Shabab account had been terminated for promoting terrorism, but what unfolded that weekend was different.
Twitter suspended Shabab’s official account, @HSMPress. Al Shabab opened another account. With the generous and principled support of my new wife, I was dipping in and out of my wedding party to track each new account and noisily demand that Twitter do something. (I was not the only one, by any means, but come on, you have to admire my commitment.)
Twitter knocked down the new account. Al Shabab opened a new account.
Rinse and repeat.
Some opined, as some always do, that this was just “whack-a-mole,” a pointless activity that accomplished nothing, because the accounts kept coming back. But I had the goods. Each iteration of the al Shabab account had fewer followers, and a bigger percentage of those followers were counterterrorism analysts rather than fans.
Over the next several years, I and others relentlessly showed the evidence for the effectiveness of persistent suspensions in reducing reach and discouraging use of the platform over time, first among jihadist terrorists and later among all manner of hateful, violent extremists and disinformation peddlers. The whack-a-molers were, in the end, whacked, as evidence accrued that persistent content moderation worked.
Alas, the days of whacking moles on Twitter are over. Instead, the moles are being welcomed back with a warm hug and an earnest apology.
Here’s what we know about the “amnesty” reinstatements so far:
They’re happening at scale. Follow @travisbrown for a great, robust list of reinstated accounts. He’s observed what must now be well over 12,000 restored accounts, with a heavy far-right focus. The main issue highlighted by the scale of activity is that Musk’s promised “manual reviews” of accounts are about as credible as his earlier promise that an independent committee would review reinstatements before they happened.
I’m approaching this with a different methodology, observing large clusters of known suspended accounts from different periods. This approach is slower to populate lists, but it’s useful for gauging scale. Even though an objectively large number of accounts have been restored, they’re still a small fraction of the total number of accounts previously suspended. I’d like to tell you that’s a hopeful sign, but it probably means that much, much worse is yet to come.
This brings us to the question of how accounts are being selected for reinstatement. At least some reinstated users have reported that their “unsuspension” was the result of a new appeal, even when previous appeals had been denied. Some said they had filed their appeals after hearing about Musk’s plans for the platform. Other reinstatements were less clear, and based on what I’ve seen so far, I’m fairly sure other methods are in play.
For instance, an extremely old cluster of Russian bots just came back online. Perhaps you’re old enough to remember when they had numbers for their handles. That’s how old this cluster is. Most of the accounts haven’t tweeted for years and years, and none of them have resumed tweeting so far. Doesn’t seem like it would be an appeal situation. Could be some kind of fluke? After all, if there’s one thing we know, it’s that Elon Musk hates bots <eye roll>.
The most activity I’ve seen in my monitoring so far comes from accounts suspended in the massive QAnon purge that took place in and around January 2021. Many “final tweets” from this dataset are celebrating the January 6 insurrection. Some older alt-right accounts have also resurfaced. A few reinstated accounts are attributed to a network identified through COVID-denial content, but with substantial and obvious overlap with the far right. At any rate, COVID-suspension reinstatements seem destined to be a growth sector.
I just expanded my white supremacist/alt-right monitor to include about 18,000 suspended or self-deleted accounts. Many, but not all, of these accounts have been dormant for quite some time. We’ll see what happens.
One question I raised in my last newsletter was whether these users would want to return. So far, the answer seems to be yes. A lot of the reinstated accounts are tweeting at high volumes, with many harassing @ replies addressed to people they hate, and they’re generally picking up right where they left off. We’ll see how this holds up, but I am less optimistic about this than I was prior to seeing data.
There are at least hundreds of thousands of potential reinstatements in the pipeline, likely millions. Many of these suspensions happened gradually and sporadically. The reinstatements look like they’re going to happen steadily, and if anything, the pace seems likely to accelerate, until one of the various swords of Damocles hanging over the company finally falls. That fall can’t come too soon.
Each restored account is another chip in what remains of the facade of civil discourse on Twitter. The outpouring of hate is horrific simply as a matter of content, but it’s going to start crowding out healthy communities and suppressing constructive voices.
Whacking the moles helped shrink online communities devoted to hate and destruction. Hugging the moles is not only rebuilding those communities, it’s re-energizing and empowering them, egged on by the destructive manchild at the top.