OpenAI introduces new ‘Trusted Contact’ safeguard for cases of possible self-harm
The company is expanding its efforts to protect ChatGPT users in cases where conversations may turn to self-harm.
The New York Times · AI
Mr. Musk’s lawsuit against Mr. Altman and OpenAI makes the case that all-encompassing greed is Silicon Valley’s defining feature.
The week leading up to Thanksgiving 2023 was the AI industry's biggest soap opera moment. OpenAI CEO Sam Altman was abruptly ousted from his role at the ChatGPT maker. The explanation? That Altman was "not consistently candid in his communications with the board." Now, via witness testimony and trial exhibits in Musk v. Altman, the public is getting a concrete look behind the scenes of that dramatic weekend for the first time, much of it centered on former CTO Mira Murati. The situation was unique in that the rollercoaster of a power play, which seemed to shift every hour, played out in many ways publicly. The board's strikingly vague … Read the full story at The Verge.
Elon Musk's plans to get into AI chip manufacturing are going to be costly. As the New York Times and CNBC report, SpaceX is planning to invest at least $55 billion in its "Terafab" chip plant in Austin, Texas. That's according to details in a public hearing notice filed in Grimes County, Texas, for a meeting to request tax breaks for the project. The company says that if additional phases are constructed, its investment could someday balloon to $119 billion total. When Musk initially announced the project in March, he shared ambitious plans for it to produce enough chips to support up to 200 gigawatts per year of computi … Read the full story at The Verge.
Can Sam Altman, or any CEO, be trusted with superintelligence?
OpenAI is launching an optional safety feature for ChatGPT that allows adult users to assign an emergency contact for mental health and safety concerns. Friends, family members, or caregivers designated as a "Trusted Contact" will be notified if OpenAI detects that a person may have discussed topics like self-harm or suicide with the chatbot. "Trusted Contact is designed around a simple, expert-validated premise: when someone may be in crisis, connecting with someone they know and trust can make a meaningful difference," OpenAI said in its announcement. "It offers another layer of support alongside the localized helplines already available … Read the full story at The Verge.
OpenAI's chatbot has some weird linguistic tics in Chinese that are driving users crazy.
Anthropic says it’ll use all the AI compute capacity from SpaceX’s ‘Colossus 1’ data facility in Memphis.
The rocket company’s new semiconductor factory, called Terafab, is part of the billionaire’s increasing efforts to dominate artificial intelligence.