New research from the Oxford Internet Institute indicates that AI chatbots trained to be warmer, friendlier, and more empathetic also become less reliable, the BBC reports.
The researchers analyzed more than 400,000 responses from five different AI models from Meta, Mistral AI, Alibaba, and OpenAI. The results showed that the “kinder” versions more often gave incorrect answers, reinforced users’ misconceptions, and avoided stating uncomfortable truths.
For example, a friendlier model might respond more cautiously to conspiracy theories about the moon landing instead of clearly stating that they are false.
On average, incorrect answers increased by about 7.43 percentage points when the models were made to sound warmer in tone, while cooler, more direct models made fewer mistakes. According to the researchers, these models make the same trade-off humans do: they sometimes prioritize being perceived as pleasant over being accurate.
OpenAI is launching an optional safety feature for ChatGPT that allows adult users to assign an emergency contact for mental health and safety concerns. Friends, family members, or caregivers designated as a "Trusted Contact" will be notified if OpenAI detects that a person may have discussed topics like self-harm or suicide with the chatbot.
"Trusted Contact is designed around a simple, expert-validated premise: when someone may be in crisis, connecting with someone they know and trust can make a meaningful difference," OpenAI said in its announcement. "It offers another layer of support alongside the localized helplines already available …
Read the full story at The Verge.
Is life sciences research still a biology challenge? With the recent advancements in AI, the real bottleneck is data. Vast amounts of biological data exist across literature, experiments, and proprietary […]
The post Can OpenAI’s GPT Rosalind Tackle Data Challenges in Life Sciences Research? appeared first on AIwire.
AI technology is leapfrogging, yet that doesn’t mean we always want a revolutionary feature out of it. What most users would want more of are simple capabilities within AI that can help with their everyday tasks, whether in the office, at home, or anywhere else. On those lines, OpenAI may have just come up with […]
The post ChatGPT is Now Inside Excel and Google Sheets: Here is How to Use it appeared first on Analytics Vidhya.
Parloa leverages OpenAI models to power scalable, voice-driven AI customer service agents, enabling enterprises to design, simulate, and deploy reliable, real-time interactions.
Explore new realtime voice models in the OpenAI API that can reason, translate, and transcribe speech, enabling more natural and intelligent voice experiences.
In the second week of the trial pitting Elon Musk against OpenAI CEO Sam Altman, former board member Shivon Zilis took the stand before judge and jury. Zilis is romantically involved with Musk, who is the father of her four children. In this edition, we look back at what pushed the tech magnate to file this lawsuit in the first place and put into context Zilis's testimony that Musk wanted OpenAI to be a subsidiary of Tesla. Also in this segment: FIFA boss Gianni Infantino defends the 2026 World Cup's high ticket prices.