The attack highlights the critical need for enhanced security measures in software supply chains to protect digital asset infrastructures.
TeamPCP open-sourced Shai-Hulud today. The OIDC token extraction technique that made the TanStack attack different from every previous campaign is now a public toolkit.
The TeamPCP threat group has pulled off another major supply chain attack, compromising more than 170 Node Package Manager (npm) and PyPI packages within a few hours this week.
The attack swept up the entire 42-package TanStack Router ecosystem (@tanstack), a routing library hugely popular among React web application developers. Multiple other namespaces were also affected, including @squawk (87 packages), @uipath (66 packages), @tallyui (30 packages), and @beproduct (18 packages), as well as Mistral AI's SDK suite on both npm and PyPI and the Guardrails AI PyPI package.
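Given a list of affected scopes like the one above, teams can check their own lockfiles for exposure. A minimal sketch, assuming an npm v2/v3 `package-lock.json`; the scope list is taken from this article, while the function name and output format are illustrative:

```python
import json

# Scopes reported as affected in this campaign (per the article).
AFFECTED_SCOPES = {"@tanstack", "@squawk", "@uipath", "@tallyui", "@beproduct"}

def flag_affected(lockfile_path="package-lock.json"):
    """Return names of installed packages whose scope appears in
    AFFECTED_SCOPES, based on an npm v2/v3 lockfile."""
    with open(lockfile_path) as f:
        lock = json.load(f)
    hits = set()
    # npm v2/v3 lockfiles key every installed package under "packages"
    # by its node_modules path, e.g. "node_modules/@tanstack/react-router".
    for path in lock.get("packages", {}):
        name = path.rsplit("node_modules/", 1)[-1]
        scope = name.split("/", 1)[0]
        if scope in AFFECTED_SCOPES:
            hits.add(name)
    return sorted(hits)
```

A hit only means a package from an affected scope is installed; whether the pinned version is one of the poisoned releases still has to be checked against the vendors' published indicators.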
The attacks, detected by several vendors' automated security tooling, took place on May 11 and spread rapidly through package ecosystems thanks to the worm capabilities of the automated Mini Shai-Hulud malware platform, analysis found.
The exact number of package versions caught up in the attack varies depending on the source; according to Aikido Security it was 373 across 169 package namespaces, while S
The attacker poisoned 84 TanStack npm versions across 42 packages, stealing GitHub OIDC tokens and cloud keys while planting a dead-man's switch that wipes the victim's system. The attacker's timing was specific: a fork, a hidden commit, a zero-diff pull request, and then nothing visible for nearly eight hours. On May 11, between 19:20 and 19:26 […]
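The OIDC tokens in question are short-lived credentials that GitHub Actions hands to any code running in a job with `id-token: write` permission: the runner injects a request URL and a bearer token as environment variables, so a malicious install script needs only two environment reads and one HTTPS call to mint a token that cloud providers are configured to trust. A minimal sketch of that legitimate request path (the `audience` value is illustrative):

```python
import json
import os
import urllib.request

def fetch_actions_oidc_token(audience="example-audience"):
    """Mint a GitHub Actions OIDC token the same way official actions do.
    Returns None when not running inside a workflow job that has
    `id-token: write` permission."""
    url = os.environ.get("ACTIONS_ID_TOKEN_REQUEST_URL")
    bearer = os.environ.get("ACTIONS_ID_TOKEN_REQUEST_TOKEN")
    if not url or not bearer:
        return None  # the runner did not inject the request endpoint
    req = urllib.request.Request(
        f"{url}&audience={audience}",  # the injected URL already carries a query string
        headers={"Authorization": f"Bearer {bearer}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["value"]  # the signed JWT
```

Outside a runner the function simply returns None, which is also part of why this class of payload stays invisible when a package is tested locally.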
Mistral AI's latest release brings asynchronous cloud-based coding sessions (Remote Agents in Vibe), a new 128B flagship model, and an agentic Work mode to Le Chat. The flagship, Mistral Medium 3.5, scores 77.6% on SWE-Bench Verified, a meaningful step forward for developers building with AI agents, MarkTechPost reports.
New research from the Oxford Internet Institute indicates that AI chatbots trained to be extra warm, friendly, and empathetic can also become less reliable, according to the BBC.
The researchers analyzed more than 400,000 responses from five different AI models from Meta, Mistral AI, Alibaba, and OpenAI. The results showed that the “kinder” versions more often gave incorrect answers, reinforced users’ misconceptions, and avoided stating uncomfortable truths.
For example, a friendlier model might treat conspiracy theories about the moon landing more cautiously rather than clearly stating that they are false.
On average, incorrect answers increased by 7.43 percentage points when the models were tuned to sound warmer. Cooler, more direct models made fewer mistakes. According to the researchers, AI makes the same trade-off as humans do: it sometimes prioritizes being perceived as pleasant over being direct.
OpenAI responds to the Axios supply chain attack by rotating macOS code signing certificates, updating apps, and confirming no user data was compromised.