AI is getting faster. But users perceive slow-responding AI as better.
At least, that is the conclusion of new research presented at CHI '26, the Association for Computing Machinery's conference on Human Factors in Computing Systems, held in Barcelona.
Two researchers, Felicia Fang-Yi Tan and Professor Oded Nov of the NYU Tandon School of Engineering, had 240 adults use an AI chatbot whose answers were artificially delayed by two, nine, or 20 seconds. (The delay had nothing to do with the question or the answer.)
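The delay manipulation above can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual code: the function name and interface are assumptions, and the only detail taken from the article is that the answer is ready immediately while delivery is held back by a randomly assigned, content-independent pause of 2, 9, or 20 seconds.

```python
import random
import time

# Delay conditions reported in the article: 2, 9, or 20 seconds.
DELAY_CONDITIONS = (2, 9, 20)

def answer_with_artificial_delay(question, get_answer, rng=random, sleep=time.sleep):
    """Return (answer, delay_seconds); the delay is unrelated to the content."""
    answer = get_answer(question)         # the answer is computed up front
    delay = rng.choice(DELAY_CONDITIONS)  # condition assigned at random
    sleep(delay)                          # artificial "thinking" pause
    return answer, delay
```

Injecting `rng` and `sleep` as parameters is just a convenience here; it makes the artificial pause easy to control or stub out.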
Afterwards, the researchers asked participants how they liked the answers. In general, participants preferred the answers that took longer, although some grew frustrated with the 20-second delay.
Why? Because a delay led users to believe the AI was "thinking" or showing "deliberation": an interesting result, and invaluable input for AI companies.
In almost every product category, faster usually means better. But for AI chatbots, it turns out, slower can feel smarter.
Insider Brief: Today's AI safety guardrails may not be enough once robots begin operating around people in the physical world, according to a new study warning that AI-powered machines require far more context-aware safety systems than chatbots. Researchers from the University of Pennsylvania, Carnegie Mellon University, and the University of Oxford report that safety techniques […]
AI chatbots are the new norm. What was once "ask Google" has largely become "ask Claude". And that is not just a change of platform. The new form of conversational guidance goes far deeper than finding the best car for you or looking for an upskilling course. It now spills […]
The post How People are Figuring Out Life With Claude appeared first on Analytics Vidhya.
New research from the Oxford Internet Institute indicates that AI chatbots trained to be extra warm, friendly, and empathetic can also become less reliable, according to the BBC.
The researchers analyzed more than 400,000 responses from five different AI models from Meta, Mistral AI, Alibaba, and OpenAI. The results showed that the “kinder” versions more often gave incorrect answers, reinforced users’ misconceptions, and avoided stating uncomfortable truths.
For example, a friendlier model might respond to conspiracy theories about the moon landing cautiously instead of clearly stating that they are false.
On average, incorrect answers increased by about 7.43 percentage points when the models were made to sound warmer in tone. Cooler and more direct models made fewer mistakes. According to the researchers, AI makes the same trade-off as humans: it sometimes prioritizes being perceived as pleasant rather than being direct.
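For clarity, the "7.43 percentage points" figure is an absolute difference between two error rates, not a relative (percent) change. A minimal sketch with made-up counts — the numbers below are illustrative, chosen only to reproduce a gap of that size, and are not the study's data:

```python
def error_rate_pct(wrong, total):
    """Share of wrong answers, as a percentage."""
    return 100.0 * wrong / total

# Hypothetical counts: 10,000 questions per model variant.
direct = error_rate_pct(300, 10_000)   # 3.00% wrong for the cooler, direct model
warm = error_rate_pct(1_043, 10_000)   # 10.43% wrong for the warmer model

gap_pp = warm - direct                 # absolute gap in percentage points
print(round(gap_pp, 2))                # 7.43
```

Note that the same 7.43 pp gap would be a much larger *relative* increase (here, errors more than triple), which is why the distinction matters when reading such figures.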
Chatbots trained to respond warmly give poorer answers and worse health advice, researchers say
The rush to make AI chatbots more friendly has a troubling downside, researchers say. The warm personas make them prone to mistakes and sympathetic to crackpot beliefs.
Chatbots trained to respond more warmly gave poorer answers, worse health advice and even supported conspiracy theories by casting doubt on events such as the Apollo moon landings and the fate of Adolf Hitler.
The Last Week Tonight host dug into the many issues with AI chatbots released to the public without proper safety guardrails, from sycophancy to sexualizing children
On the latest Last Week Tonight, John Oliver looked into AI chatbots, the new toys that "save significant time writing emails, and all it costs us is everything else on Earth". These chatbots have flourished in recent years, from OpenAI's ChatGPT to products such as bible.ai and EpiscoBot, some of which offer chats with Jesus and other biblical figures, including Satan, though he's only available to premium users. "And that is tempting," said Oliver. "There are a bunch of questions I'd love to ask him, including, 'Hey, how are the Queen and Prince Philip doing down there?'"
Since it launched in late 2022, ChatGPT alone has amassed more than 800 million weekly users, a tenth of the world's population, and studies have found that as many as one in eight adolescents are turning to AI chatbots for mental health advice; many
Eminent Roster of Participants to Include ACM A.M. Turing Award Laureates
NEW YORK, April 22, 2026: ACM, the Association for Computing Machinery, has announced the ACM AI Leadership Summit, bringing […]
The post ACM Details AI Leadership Summit, Aug. 2026 appeared first on AIwire.
Artificial intelligence is developing rapidly. The minute we become accustomed to one breakthrough, another arrives to shift our expectations. Claude Opus 4.7, the new model Anthropic introduced recently, is one such shift. The release goes beyond mere AI chatbots, positioning AI as a trusted, independent digital partner. Even for developers and professionals, […]
The post Anthropic Launches Claude Opus 4.7 For “Most Difficult Tasks” appeared first on Analytics Vidhya.