AI researchers want AI to fake “thinking”
They want deliberately slow chatbot responses to make people trust the answer more. And it makes me trust the researchers less.
AI is getting faster. But slow-responding AI is perceived as better by users.
At least, that’s the conclusion of new research presented at CHI ’26, the Association for Computing Machinery’s conference on Human Factors in Computing Systems, held in Barcelona.
Researchers Felicia Fang-Yi Tan and Professor Oded Nov of the NYU Tandon School of Engineering tested 240 adults by having them use an AI chatbot. The answers were artificially delayed by 2, 9, or 20 seconds, and the delay had nothing to do with the question or the answer.
Afterwards, the researchers asked the participants how they liked the answers. In general, participants preferred the answers that took longer (although some got frustrated with the 20-second delay).
Why? Because a delay indicated that the AI was “thinking” or showing “deliberation.”
This was an interesting result, and the research is valuable input for AI companies.
What it really means
In almost every product category, faster means better. But for AI chatbots, apparently, a delay makes people assume the results are better.
In other words, unlike with other products, people judge AI the way they judge people. When a person gives a slower response, we tend to assume it’s a more thoughtful one.
In still other words, study participants believed something that wasn’t true.
Problematically, the researchers advise AI developers to implement “Context-Aware Latency,” abandoning a one-size-fits-all approach and treating latency as a “tunable design variable.” Simple questions, they say, should get a quick answer, while more complex questions, including moral dilemmas, should trigger delays that match the request’s gravity.
They call it “positive friction.”
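To make the recommendation concrete, here’s a minimal sketch of what a gravity-matched delay might look like in code. Everything in it is my own assumption: the tier names, the cue-word heuristic, and the delay values (loosely echoing the study’s 2-, 9-, and 20-second conditions) are illustrative, not taken from the paper.

```python
import time

# Hypothetical delay tiers in seconds; my own stand-ins, not values
# prescribed by the researchers.
DELAY_BY_GRAVITY = {"simple": 0, "moderate": 5, "weighty": 12}

def classify_gravity(prompt: str) -> str:
    """Toy stand-in for whatever classifier a real product would use
    to judge a question's 'gravity'."""
    weighty_cues = ("should i", "is it ethical", "moral", "right or wrong")
    if any(cue in prompt.lower() for cue in weighty_cues):
        return "weighty"
    return "moderate" if len(prompt.split()) > 20 else "simple"

def respond_with_positive_friction(prompt: str, generate) -> str:
    """'Context-aware latency': the answer is generated immediately,
    then held back for a delay matched to the question's gravity."""
    answer = generate(prompt)  # the model is already done at this point
    time.sleep(DELAY_BY_GRAVITY[classify_gravity(prompt)])
    return answer  # the extra wait said nothing about answer quality
```

Note what the sketch makes plain: the answer exists before the delay even begins. The wait is pure theater.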
The researchers say, in effect, that it’s good practice to trick users into believing the AI chatbot is considering their question more deeply than it really is, because users will be happier in their delusion that AI is like people, who need more time to mull over serious questions.
In fairness, the researchers do warn that if users equate longer response times with higher quality, they might place undue trust in a slower system. That’s a good warning.
But by entertaining the recommendation that AI chatbot designers build in artificial delays so users falsely believe the AI is thinking harder, the researchers lose the plot. This is yet another example of AI researchers and developers being comfortable with, or indifferent to, user delusions about AI.
Where’s the impulse to inform? Why don’t researchers see this phenomenon as a teachable moment, a chance to show people that AI is not “thinking” and is not like a person?
The trouble with AI anthropomorphism
It’s true that software developers engage in user interface optimization that includes loading animations, progress bars, and confirmation dialogs.
It’s also true that manipulative online services, like background checkers and people finders, use fabricated, drawn-out progress bars to build perceived value and exploit the sunk cost fallacy so you’re more likely to pay for a report you thought was free.
But AI is different. Nobody believes these other services are “thinking” (or, for that matter, experiencing consciousness or emotions). But more and more people believe AI is, and that’s a dangerous trend.
Treating chatbots as sentient beings allows tech companies to take the attention economy to the next level — the “attachment economy” — making users emotionally attached to their products, despite the potential harms that I’ve talked about so many times in this space.
That’s why researchers who discover that people harbor specific delusions about AI chatbots, believing they have thoughts, feelings, or consciousness, should feel compelled to educate the public about what’s actually true rather than reinforce those delusions.
In this case, if chatbot response times were more random and not connected to the importance of a question, people might intuitively learn that AI doesn’t “think” like people do.
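A sketch of that thought experiment is even simpler: whatever delay remains is random jitter, deliberately uncorrelated with the question. The jitter range below is my own placeholder.

```python
import random
import time

def respond_without_theater(prompt: str, generate) -> str:
    """Latency carries no signal: any residual delay is random jitter,
    unrelated to the question's content or supposed gravity."""
    answer = generate(prompt)
    time.sleep(random.uniform(0.0, 2.0))  # arbitrary placeholder range
    return answer
```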
But the idea of tricking users with deliberate context-aware latency and “positive friction” is unethical.
If people think AI is a tool, they’ll want the fastest response possible. If they believe AI is a kind of person, they’ll assume a slower response is better.
AI is a tool, not a person. Let’s just make the answers as fast as possible.
You’re reading the free version of Machine Society. The paid version, which costs $5 per month or $50 per year, has full content. If you can, please support independent journalism in general, and this independent journalist in particular, by becoming a paid subscriber!
More from Mike
NEW THIS WEEK:
Your AI strategy is all wrong
Superintelligent podcast: The Bullied Kids Who Became Billionaires
READ, LISTEN, FOLLOW, & SUBSCRIBE:
Machine Society, The Attachment Economy, Computerworld, Superintelligent, TWiT, blog, The Gastronomad Experience, Book, Gastronomad on Surf Social, Bluesky, Reddit, Notes, Mastodon, Threads, X, Instagram, Flickr, Facebook, and Linkedin!