OpenAI introduces new ‘Trusted Contact’ safeguard for cases of possible self-harm
The company is expanding its efforts to protect ChatGPT users in cases where conversations may turn to self-harm.

Tencent has updated its Hunyuan AI model, its first major release since it recruited Yao Shunyu, a leading AI scientist from OpenAI. Tencent's Hy3 model, currently available in preview, offers improvements in areas from complex reasoning to coding. The Chinese technology conglomerate is playing catch-up with other Chinese AI developers, including ByteDance, Alibaba, and DeepSeek. China is betting big on open-source AI to offer alternatives to major US players. Back in 2023, Tencent claimed its then-new Hunyuan LLM was a more powerful and intelligent option than the versions of ChatGPT and Llama available at the time. Tencent has backed AI start-ups including Moonshot AI and StepFun, hoping that they will boost its cloud computing division. The company has also restructured its research team to improve the quality of training data. It aims to double its investment in AI to more than $5 billion this year. Not to be outdone, DeepSeek announced its V4 Flash and V4 Pro Series, the newest versions of its models.
The week leading up to Thanksgiving 2023 was the AI industry's biggest soap opera moment. OpenAI CEO Sam Altman was abruptly ousted from his role at the ChatGPT maker. The explanation? That Altman was "not consistently candid in his communications with the board." Now, via witness testimony and trial exhibits in Musk v. Altman, the public is getting a concrete look behind the scenes of that dramatic weekend for the first time, much of it centered on former CTO Mira Murati. It was a unique situation: the rollercoaster of a power play, which seemed to change every hour, played out in many ways in public. The board's strikingly vague … Read the full story at The Verge.
Can Sam Altman, or any CEO, be trusted with superintelligence?
OpenAI is launching an optional safety feature for ChatGPT that allows adult users to assign an emergency contact for mental health and safety concerns. Friends, family members, or caregivers designated as a "Trusted Contact" will be notified if OpenAI detects that a person may have discussed topics like self-harm or suicide with the chatbot. "Trusted Contact is designed around a simple, expert-validated premise: when someone may be in crisis, connecting with someone they know and trust can make a meaningful difference," OpenAI said in its announcement. "It offers another layer of support alongside the localized helplines already available … Read the full story at The Verge.
OpenAI's chatbot has some weird linguistic tics in Chinese that are driving users crazy.
Is life sciences research still a biology challenge? With recent advances in AI, the real bottleneck is data. Vast amounts of biological data exist across literature, experiments, and proprietary […] The post Can OpenAI’s GPT Rosalind Tackle Data Challenges in Life Sciences Research? appeared first on AIwire.
AI technology is advancing by leaps and bounds, yet that doesn’t mean we always want a revolutionary feature from it. What most users want more of are simple AI capabilities that help with everyday tasks, whether in the office, at home, or anywhere else. Along those lines, OpenAI may have just come up with […] The post ChatGPT is Now Inside Excel and Google Sheets: Here is How to Use it appeared first on Analytics Vidhya.
Musk was “prepared to do the for-profit, provided he would get control.”