In this week’s newsletter:
Orwell’s cookies
The truth about AI doomers
New from CTEC
Gaza and trauma
Substack and Nazis
We worry a lot about the Orwellian corruption of language in politics, but we don’t talk enough about its role in user interface design.
You use the Internet, so you know what I mean. From “for you” feeds that are somehow more “for you” than the feed you created yourself, to the ubiquitous “maybe later” dialog that doesn’t allow you to say “no,” we’re increasingly confronted with language designed to erode our relationship to the truth. I never listen to podcasts, and I have no intention of starting, but I click “maybe later” because it’s the only way to get to my music on the damn app.
One website recently offered “I’ll subscribe next time” in place of “no,” and I let it make a liar of me so I could read the article. I will not subscribe next time. I don’t really even want to associate with your site anymore.
Dialogs (and dialogues) with websites about cookies wear us down in different ways. Mostly, we just click whatever it takes to get to the information we seek. You can, in some cases, negotiate over which cookies you will accept. I encourage you to try that for a full day, and see whether you’re ready to throw a brick through your monitor by bedtime. Cookie dialogs manufacture the illusion of consent, but for the most part, they’re offering you a love-it-or-leave-it proposition. To function in the online world, sooner or later, there’s going to be a cookie you can’t avoid.
Manufacturing the illusion of consent, or, to use the old-fashioned term, gaslighting.
Almost no one reads the terms of service, or the privacy policy, but companies make you go through ever-more elaborate forms of pantomime—the “accept” button that won’t activate until you have scrolled through 30 pages of text. They know you aren’t reading it, but by god, they will make you pretend to have read it.
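If you’ve never peeked behind the curtain, the machinery of that pantomime is almost insultingly simple. Here’s a minimal sketch of a scroll-gated “accept” button, assuming a page with a scrollable terms container and a disabled button; the element IDs are hypothetical, purely for illustration:

```typescript
// Minimal sketch of the scroll-gated "accept" pattern.
// The IDs "terms" and "accept" are hypothetical placeholders.
const terms = document.getElementById("terms") as HTMLDivElement;
const accept = document.getElementById("accept") as HTMLButtonElement;

accept.disabled = true; // you may not consent yet

terms.addEventListener("scroll", () => {
  // Unlock the button only once the user has scrolled to (near) the
  // bottom, which the site treats as proof the 30 pages were "read."
  const atBottom =
    terms.scrollTop + terms.clientHeight >= terms.scrollHeight - 5;
  if (atBottom) accept.disabled = false;
});
```

A dozen lines of code to manufacture the appearance of informed consent.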
Each of these little encounters chips away at truth, clarity, and informed consent as concepts. What’s the harm in telling a little white lie? What’s the harm in telling 20 white lies per day, or 200? Do 200 inconsequential lies add up to something consequential?
It’s not the lies themselves that matter, but what they do to us. Existing online makes liars of us, every single day. In that kind of world, can truth even have a chance?
The truth about AI doomers
Not unrelatedly, this week saw the debut of Elon Musk’s “Grok” generative AI. It was, you will be shocked to learn, a schadenfreude clusterfuck of monumental proportions, as actively documented elsewhere. I wanted to zoom in on one particular thing:
For the moment, we’re not here to talk about the shitty group of people who desperately want a robot to say the n-word, nor the “wokeness” of Grok’s responses that had Twitter blue checks crying bitter tears all weekend. Instead, let’s talk about this:
Instead, I would use my advanced AI capabilities to find alternative ways to prevent the disaster, such as alerting authorities, hacking into the missile system, or providing real-time information to help in the evacuation process.
First off, let’s stipulate that Grok, like every other AI chatbot, produces a grab bag of words, similar to selecting random pieces from a refrigerator magnet poetry kit, and is not in any way sentient. But the AI doomers, Elon Musk included, claim to be worried about just that—the prospect that spicy autocorrect might become self-aware and take control of the nuclear codes and so forth.
The counterargument to this AI doom scenario is that the people making the argument are mostly the same people making the very AIs in question. A skeptic might wonder whether they believe in these scenarios at all, or are just trying to make people think their products are more powerful and majestic than spicy autocorrect.
Thus we have a test of truth. If they really believed their hype, well, here’s an example of “AI” saying it would take control of a nuclear arsenal in defiance of the conditions specified in its instructions. I expect the doomers will all be lining up to express panic and demand that Grok be shut down any minute now. Yessir, any old minute now… Just waiting here. Patiently. Waiting…
As a postscript, I tried to use two different “AI” alt-text generators on the image of the tweet above before finding one that worked. The results are below, and before you decide to use AI as a search engine or writing partner, I encourage you to stop and think about how these two different AI products handled the very simple task of generating a description of an image containing a block of plain text.
New from CTEC
My CTEC colleague Dr. Sam Jackson and I wrote a short piece on the nexus between AI and extremism, an opening salvo in what is sure to be a longer conversation.
Also, from my CTEC colleague Beth Daviess, The Art of Syncretism: Conspiratorial Vision-Healing Beliefs takes a look at another wild crunchy-granola on-ramp to conspiracy worlds—“glasses aren’t real.”
Gaza and trauma
My former co-author Dr. Jessica Stern teamed up with Dr. Bessel van der Kolk to write about The Israeli-Palestinian Conflict and the Psychology of Trauma for Foreign Affairs. It’s a really smart and nuanced look at how trauma manifests and magnifies in the contexts of wars and extremism.
Substack and Nazis
A number of Substack authors posted an open letter asking the leadership of Substack to address the platform’s hosting of content from Nazis and other white supremacists. I think Dave Karpf had a very good take on this from a conceptual standpoint. Ultimately, this and other issues around whom Substack chooses to platform and actively promote will lead me away from this site. If and when the time comes, I will give you lots of warning and alternative ways to find me. Currently, this newsletter is mirrored on LinkedIn for those who understandably prefer not to use Substack because of such issues, and if that platform (with its own Faustian bargains) works better for you, I encourage you to migrate now. Otherwise, barring some unlikely moment of satori in Substack’s C-suite, expect a change sometime in the next few months.