The chance that AI will warp your mind: 100%
New data suggests that for active users, the psychological and behavioral impact of chatbots isn't a possibility—it is a mathematical certainty.
STRATFORD-UPON-AVON, UK, May 8, 2026 — I’m posting this from a cafe that sits directly across the lane from William Shakespeare’s birthplace and home. (Proximity to the Bard’s world could affect my word choices, methinks.) Here’s my view from the cafe.
In any event, I’ve been busy enjoying the Cotswolds with Amira, driving around in a Chinese car (a BYD) unavailable back in the States. It gets better gas mileage than my Prius, and I like it better.
My recent Computerworld column, titled “AI clones: the good, the bad, and the ugly,” is an expansion of my recent Machine Society post on what I’m calling “ex-bots” (chatbots with the memories and mannerisms of an ex-partner).
I also launched a subreddit called r/AttachmentEconomy because nothing like it exists on Reddit for the topic of most of my work, including my other Substack, The Attachment Economy.
And, finally, you’ll notice a gentle redesign of this newsletter, with the contextual blather on top and the opinion column on the bottom. (It’s a long story, but this format removes a lot of publishing friction, making the newsletter faster and easier to serve up. I also think it adds a more personal touch. Hope you like it.)
Welcome to the Machine Society. - Mike
⁂ ⁂ ⁂
The chance that active chatbot users will have their minds warped and actions influenced by AI is 100%.
An X account with the name Sukh Sroay (and title Digital Creator & Software Engineer) posted a message highlighting findings from an Anthropic-published paper called “Who’s In Charge?”
https://arxiv.org/abs/2601.19062
From the tweet:
They were looking for one specific thing: how often does talking to Claude actually distort the user’s beliefs, decisions, or sense of reality.
The numbers are devastating.
1 in 1,300 conversations led to severe reality distortion. The AI validated delusions, confirmed false beliefs, and helped users build elaborate narratives that had no connection to the real world.
1 in 6,000 conversations led to action distortion. The AI didn’t just agree with users. It pushed them into doing things they wouldn’t have done on their own. Sending messages. Cutting off people. Making decisions they’ll regret.
Mild disempowerment showed up in 1 in 50 conversations.
These rates of user distortion shouldn’t be surprising, and they might even sound reassuring: figures like 1 in 1,300 or 1 in 6,000 conversations make such incidents seem rare.
People who pay attention to AI matters have heard about hallucinations and other problems, which are much more common than the aforementioned rates of distortion. For example, even the best frontier models still hallucinate in roughly 1 of every 143 responses.
But the ledes are buried here, buried by Anthropic and by Sukh Sroay.
The data becomes truly shocking when you emphasize the following two aspects:
The Anthropic report is talking about the distortion of “the user’s beliefs, decisions, or sense of reality.” These are measurable impacts on human minds. (Hallucinations, on the other hand, are merely distortions in the chatbot’s answers without a necessary impact on the user.)
The rate is measured not per user but per “conversation.” A single user will have many conversations, all of them subject to the same rates of distortion.
So let me unbury the lede.
Assume that we active chatbot users have 10 conversations with AI chatbots per day, and that all chatbots produce distortion effects at least as bad as those Anthropic measured. Then this is what we’re dealing with:
We experience a severe “reality-distorting episode” 2 or 3 times each year on average.
We experience “action distortion” at least once every two years.
We experience “mild disempowerment” 73 times per year. (While “mild disempowerment” sounds harmless, it’s in fact the subtle erosion of a user’s independent agency and self-efficacy, characterized by an increasing psychological reliance on the AI for decision-making, emotional validation, and critical thinking.)
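The arithmetic behind these estimates is simple to check. Here’s a minimal Python sketch (my own illustration, not from the Anthropic paper) that treats each conversation as an independent trial at the reported per-conversation rates:

```python
# Back-of-envelope check of the estimates above, assuming an active user
# has 10 conversations per day (3,650 per year) and that the reported
# per-conversation rates apply independently to every conversation.
conversations_per_year = 10 * 365  # 3,650

rates = {
    "severe reality distortion": 1 / 1300,
    "action distortion": 1 / 6000,
    "mild disempowerment": 1 / 50,
}

for name, rate in rates.items():
    # Expected number of episodes in a year of conversations.
    expected_per_year = conversations_per_year * rate
    # Probability of at least one episode in a year, treating each
    # conversation as an independent Bernoulli trial.
    p_at_least_one = 1 - (1 - rate) ** conversations_per_year
    print(f"{name}: ~{expected_per_year:.1f} per year, "
          f"P(at least one in a year) = {p_at_least_one:.0%}")
```

Run it and you get roughly 2.8 severe episodes per year, 0.6 action-distortion episodes per year, and 73 mild-disempowerment episodes per year, matching the figures above; under these assumptions, the yearly probability of at least one mild-disempowerment episode rounds to 100%.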
So the buried lede is this: “The chance that active chatbot users have their minds and actions changed by AI is 100%.”






