The Kindle edition of my novel Optimal is on sale for just $0.99 for the next week. As we lurch into a post-Twitter world, it’s gotten a lot harder for people like me (a certain level of accomplishment but not actual fame) to promote their work, so I beseech you, dear readers, to pass the word in whatever way works for you. If you’re interested in the stuff that this newsletter is about—social media, technology run amok, algorithms, extremism, and artificial intelligence—you will probably find something of interest in Optimal. But also, and I cannot stress this enough, it’s a pretty fun read. In the words of my favorite Amazon review:
Lawful Extremism: The Dred Scott Decision
Speaking of promotion in the post-Twitter environment, just a reminder that my latest non-fiction epic is also out and free to read. Lawful Extremism: Extremist Ideology and the Dred Scott Decision tackles an important moment in U.S. history through the lens of extremist studies, and it sets up a lot of work I hope to be doing in the months and years to come. If you haven’t read it yet, please do. On a related topic, check out this award-winning dissertation showing that Black families with enslaved ancestors are worse off economically today than those whose ancestors were free.
Large language models propagate racist medicine
If you’ve been with me a while, I am sure you are not surprised by this headline. The nihilistic tech industry push to implement generative text bots—or “artificial intelligence,” as they inaccurately describe them—has reached every facet of our lives, including, most worryingly, medicine. If you think there’s a hell of a milkshake duck brewing here, well, you’re not wrong. Fortune, not exactly a bleeding-heart publication, has a refreshingly blunt summary of the paper in plain English: Bombshell Stanford study finds ChatGPT and Google’s Bard answer medical questions with racist, debunked theories that harm Black patients. Try not to let it raise your blood pressure, as you might end up getting treated in the emergency room by spicy autocorrect.
Misinformation about gender-affirming care
If you’re confused by the blizzard of competing claims about gender-affirming care (perhaps you work at the New York Times), this is a great rundown of some of the most important types of misinformation on the subject—including oversimplification, outright misinterpretation, and false equivalences. For those of you interested in extremism, this article will stand you in good stead across a number of topics. As the authors point out: “All of these tactics come straight from the same playbook used to defend scientific racism, sexism, homophobia, and ableism.”
Russian social media propaganda
Despite what some people would have you think, we have extensive, credible documentation of Russian social media propaganda, identified and published by the technology companies themselves, who would have greatly preferred to pretend this wasn’t a real thing. This paper explores how anti-Muslim propaganda and White identity are represented in a verified dataset.