Pennsylvania sued an AI company, saying its chatbots illegally hold themselves out as doctors and are deceiving users into thinking they are getting medical advice from a licensed professional.
Richard Dawkins and chatbots | LLM meaning | Flattery battery | Dancing in PE | Maths breakthrough
The otherwise admirable Richard Dawkins should adjust the local settings of the chatbot or tell it to be less obsequious (Richard Dawkins concludes AI is conscious, even if it doesn’t know it, 6 May). Such bots are initially geared to American overenthusiasm and egregiously flattering reinforcement, but just tell them you want British attitude. They’re only simulating, you know.
Brian Reffin Smith
Berlin, Germany
• With artificial intelligence bringing “large language models” into everyday use, the LLM after my name has acquired a new meaning. For 70 years I assumed that it referred to my Cambridge master of laws.
Trevor Lyttleton
London
Pennsylvania has filed a lawsuit against Character.AI, alleging that one of its chatbots unlawfully impersonated a licensed psychiatrist in violation of the state’s Medical Practice Act. Governor Josh Shapiro said residents must be able to trust whether they are receiving advice from a qualified professional, particularly regarding their health. During testing by a state investigator, a Character.AI […]
According to Pennsylvania's filing, a Character AI chatbot presented itself as a licensed psychiatrist during a state investigation, and also fabricated a serial number for its state medical license.
A new study from Harvard Medical School indicates that AI can outperform doctors in initial assessments in emergency care, according to The Guardian. The study, published in the journal Science, compared AI tools with doctors in triage situations — the process in which patients are sorted and prioritized, and where quick decisions must be made based on limited information.
The results show that the AI system identified the correct or nearly correct diagnosis in 67% of cases, compared to 50% to 55% for doctors. When more detailed patient data was available, the AI’s accuracy increased to 82%, while the doctors’ accuracy ranged from 70% to 79%.
The AI, based on OpenAI’s o1 model, also performed better at developing treatment plans. In a test using clinical cases, the AI achieved 89% accuracy, while doctors using traditional tools such as search engines reached 34%.
However, the researchers emphasized that the results do not mean AI can outright replace doctors.
It's been almost three years since Silicon Valley started aggressively pushing large language model-based chatbots like ChatGPT as the supposedly inevitable future of everything, and there's no group that has felt the pressure quite like Gen Z.
As with many tech trends before it, it's no surprise that young people are among the biggest adopters of AI chatbot tools. But contrary to the tales spun by tech companies like OpenAI and Google, polling data shows that Gen Z students and workers are a big part of the wider cultural backlash against AI. And even as they utilize these tools, vast swaths of young people remain deeply acrimonious toward them …
Read the full story at The Verge.