ChatGPT Has 'Goblin' Mania in the US. In China It Will 'Catch You Steadily'
OpenAI's chatbot has some weird linguistic tics in Chinese that are driving users crazy.
AI Insider
Pennsylvania has filed a lawsuit against Character.AI, alleging that one of its chatbots unlawfully impersonated a licensed psychiatrist in violation of the state’s Medical Practice Act. Governor Josh Shapiro said residents must be able to trust whether they are receiving advice from a qualified professional, particularly regarding their health. During testing by a state investigator, a Character.AI […]
Like tricksters, LLMs have perfected the art of plausibility
State says chatbot claimed to practice medicine, gave invalid license number.
Pennsylvania sued an AI company, saying its chatbots illegally hold themselves out as doctors and deceive users into thinking they are getting medical advice from a licensed professional.
According to Pennsylvania's filing, a Character.AI chatbot presented itself as a licensed psychiatrist during a state investigation and fabricated a serial number for its state medical license.
What should we do when a chatbot behaves like a criminal?
To test the safety and security of AI, hackers have to trick large language models into breaking their own rules. It requires ingenuity and manipulation, and can come at a deep emotional cost.

A few months ago, Valen Tagliabue sat in his hotel room watching his chatbot, and felt euphoric. He had just manipulated it so skilfully, so subtly, that it began ignoring its own safety rules. It told him how to sequence new, potentially lethal pathogens and how to make them resistant to known drugs.

Tagliabue had spent much of the previous two years testing and prodding large language models such as Claude and ChatGPT, always with the aim of making them say things they shouldn't. But this was one of his most advanced "hacks" yet: a sophisticated plan of manipulation, which involved him being cruel, vindictive, sycophantic, even abusive.

"I fell into this dark flow where I knew exactly what to say, and what the model would say back, and I watched it pour out everything," he says. Thanks to him, […]
WIRED spoke with Bloomberg's chief technology officer about the big, chatbot-style changes coming to the iconic platform for traders.