Barry Diller trusts Sam Altman. But ‘trust is irrelevant’ as AGI nears, he says.
Barry Diller defended OpenAI CEO Sam Altman, while warning that AGI remains an unpredictable force needing guardrails.
Everyone is adopting AI coding tools. Engineers are writing code faster than ever. But are organizations actually delivering value faster? That’s not obvious. I wrote Enabling Microservice Success with a big focus on engineering enablement, guardrails, automated testing, active ownership, and light-touch governance. I didn’t know AI coding agents were coming, but it turns […]
An AI agent that revealed sensitive data without being asked. An agent that overrode its own guardrails. Another that sent credentials to an attacker via Telegram because, after a reset, it forgot it wasn’t supposed to.

It’s no secret that AI agents have huge potential, balanced by equally big risks. What’s becoming apparent, however, is how quickly agentic systems can veer wildly off course and start exposing critical information under real-world conditions.

A look at just how easily this can happen emerges from Phishing the agent: Why AI guardrails aren’t enough, a report on tests conducted by cloud identity and access management (IAM) company Okta Threat Intelligence, which uncovered all of the problems cited above, and more. Their research focused on OpenClaw, a model-agnostic, multi-channel AI assistant that has seen explosive growth inside enterprises since appearing in late 2025.

The Telegram hack

In common with the growing list of rival agents, OpenClaw is only as useful