Barry Diller trusts Sam Altman. But ‘trust is irrelevant’ as AGI nears, he says.
Barry Diller defended OpenAI CEO Sam Altman, while warning that AGI remains an unpredictable force needing guardrails.
Everyone is adopting AI coding tools. Engineers are writing code faster than ever, but are organizations actually delivering value faster? That's not obvious. I wrote Enabling Microservice Success with a big focus on engineering enablement, guardrails, automated testing, active ownership, and light-touch governance. I didn't know AI coding agents were coming, but it turns […]
Attackers, too, are looking to cash in on the AI coding craze, adapting their supply-chain techniques to target coding agents themselves. Many AI agents autonomously scan package registries such as NPM and PyPI for components to integrate into their coding projects, and attackers are beginning to take advantage of this. Bait packages with persuasive descriptions and legitimate functionality have cropped up on such registries, while packages squatting on names that AI coding agents are likely to hallucinate as dependencies are another attack vector on the horizon. Researchers from security firm ReversingLabs have been tracking one such supply-chain attack that uses "LLM Optimization (LLMO) abuse and knowledge injection" to make packages more likely to be discovered and chosen by AI agents. Dubbed PromptMink, the attack was attributed to Famous Chollima, one of North Korea's APT groups tasked with generating funds for the regime by targeting developers and users from the cryptocurrency and
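One common mitigation for the hallucinated-dependency risk described above is to gate agent-proposed packages behind an explicit allowlist before anything is installed. Below is a minimal sketch of that idea; the function name, the allowlist contents, and the example package names are illustrative assumptions, not drawn from the ReversingLabs research.

```python
# Hedged sketch: vet dependencies an AI coding agent proposes against an
# allowlist of packages your organization has already reviewed, so a
# bait package or a typo/hallucinated name never reaches `pip install`.
ALLOWED = {"requests", "numpy", "flask"}  # illustrative reviewed set

def vet_dependencies(proposed: list[str]) -> tuple[list[str], list[str]]:
    """Split agent-proposed package names into approved and blocked lists."""
    approved = [p for p in proposed if p.lower() in ALLOWED]
    blocked = [p for p in proposed if p.lower() not in ALLOWED]
    return approved, blocked

approved, blocked = vet_dependencies(["requests", "reqeusts-helper"])
print(approved)  # ['requests']
print(blocked)   # ['reqeusts-helper']
```

An allowlist is deliberately conservative: it trades some agent autonomy for certainty that only vetted components enter a build, which is the point when the registry itself is the attack surface.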
An AI agent that revealed sensitive data without being asked. An agent that overruled its own guardrails. Another that sent credentials to an attacker via Telegram because, after a reset, it forgot it wasn't supposed to. It's no secret that AI agents have huge potential, balanced by equally big risks. What's becoming apparent, however, is how quickly agentic systems can veer wildly off course and start exposing critical information under real-world conditions. A look at just how easily this can happen emerges from Phishing the agent: Why AI guardrails aren't enough, a report on tests conducted by cloud identity and access management (IAM) company Okta Threat Intelligence, which uncovered all of the problems cited above, and more. Their research focused on OpenClaw, a model-agnostic, multi-channel AI assistant that has seen explosive growth inside enterprises since appearing in late 2025.

The Telegram hack

In common with the growing list of rival agents, OpenClaw is only as useful
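A defense that comes up in incidents like the credential-to-Telegram leak is a last-line outbound filter: scan every message an agent tries to send for credential-shaped strings, independent of whatever in-model guardrails exist. The sketch below is not from the Okta report; the patterns and function are illustrative assumptions.

```python
import re

# Hedged sketch: block an agent's outbound message if it appears to
# contain a secret, regardless of what the model "intends".
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),               # AWS access key id shape
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # generic key assignment
]

def outbound_allowed(message: str) -> bool:
    """Return False if the message matches any credential-like pattern."""
    return not any(p.search(message) for p in SECRET_PATTERNS)

print(outbound_allowed("deploy finished"))                 # True
print(outbound_allowed("api_key=sk-123456 sent to chat"))  # False
```

The design point is that the check lives outside the model: a reset or a jailbreak can make the agent forget its instructions, but it cannot make the filter forget its regexes.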
The elite want to protect themselves from AI. What about the rest of us?
Sticking with past strategies means falling behind on both fronts: slower development and diminished functionality.
Among the benefits of a union workforce, according to Otis Worldwide CEO Judy Marks: ‘We have a common mission.’
Learn how to get the most out of Claude Code in "How to Improve Claude Code Performance with Automated Testing," published on Towards Data Science.
Factory has secured $150 million in funding at a $1.5 billion valuation to expand its AI-driven coding platform for enterprise engineering teams. The round was led by Khosla Ventures, with participation from Sequoia Capital, Insight Partners, and Blackstone. Keith Rabois has joined the company’s board. Founded in 2023 by Matan Grinberg, the company develops AI […]