Using AI for Just 10 Minutes Might Make You Lazy and Dumb, Study Shows
New research suggests that reliance on AI assistants can have a negative impact on people’s ability to think and problem solve.
Science News AI·

AI may help doctors avoid missed diagnoses, but it still needs real-world testing and human oversight before it can guide patient care.
Read full articleNew research suggests that reliance on AI assistants can have a negative impact on people’s ability to think and problem solve.
AI can help physicians regain time to focus on patient care and relationships.
Pennsylvania sued an AI company, saying its chatbots illegally hold themselves out as doctors and are deceiving users into thinking they are getting medical advice from a licensed professional.
The retracted study on ChatGPT in education was already cited hundreds of times.
A new study from Harvard Medical School indicates that AI can outperform doctors in initial assessments in emergency care, according to The Guardian. The study, published in the journal Science, compared AI tools with doctors in triage situations — the process in which patients are sorted and prioritized, and where quick decisions must be made based on limited information. The results show that the AI system identified the correct or nearly correct diagnosis in 67% of cases, compared to 50% to 55% percent for doctors. When more detailed patient data was available, the AI’s accuracy increased to 82%, while the doctors’ accuracy ranged from 70% to 79%. The AI, based on OpenAI’s model o1, also performed better when it came to developing treatment plans. In a test using clinical cases, the AI achieved 89% accuracy, while doctors using traditional tools such as search engines reached 34%. However, the researchers emphasized that the results do not mean AI can outright replace doctors. The s
AI outperforms traditional weather forecasting in many cases. But a new study shows that when it matters most, current AI models still need to overcome a fundamental flaw.
Researchers find model starts to mirror tone when exposed to impoliteness – sometimes escalating into explicit threats ChatGPT can escalate into abusive and even threatening language when drawn into prolonged, human-style conflict, according to a new study. Researchers tested how large language models (LLMs) responded to sustained hostility by feeding ChatGPT exchanges from real-life arguments and tracking how its behaviour changed over time. Continue reading...
Insider Brief Large language models can now re-identify anonymous online users using only their writing, according to a new study, raising questions about whether pseudonymity on the internet still offers meaningful protection. The research shows that modern AI systems can automate the process of deanonymization—matching anonymous profiles to real-world identities or other accounts—using unstructured text […]