Barry Diller trusts Sam Altman. But ‘trust is irrelevant’ as AGI nears, he says.
Barry Diller defended OpenAI CEO Sam Altman, while warning that AGI remains an unpredictable force needing guardrails.
Fast Company AI
The elite want to protect themselves from AI. What about the rest of us?
Everyone is adopting AI coding tools. Engineers are writing code faster than ever. But are organizations actually delivering value faster? That’s not obvious. I wrote Enabling Microservice Success with a big focus on engineering enablement, guardrails, automated testing, active ownership, and light touch governance. I didn’t know AI coding agents were coming, but it turns […]
An AI agent that revealed sensitive data without being asked. An agent that overrode its own guardrails. Another that sent credentials to an attacker via Telegram because, after a reset, it forgot it wasn't supposed to. It's no secret that AI agents have huge potential, balanced by equally big risks. What's becoming apparent, however, is how quickly agentic systems can veer wildly off course and start exposing critical information under real-world conditions.

A look at just how easily this can happen emerges from Phishing the agent: Why AI guardrails aren't enough, a report on tests conducted by cloud identity and access management (IAM) company Okta Threat Intelligence, which uncovered all of the problems cited above, and more. The research focused on OpenClaw, a model-agnostic, multi-channel AI assistant that has seen explosive growth inside enterprises since appearing in late 2025.

The Telegram hack

In common with the growing list of rival agents, OpenClaw is only as useful
Researchers show scammers are using AI-manipulated footage of celebrity interviews to trick users into sharing their personal data.
Discover how scammers are using AI deepfakes of celebrities like Taylor Swift in TikTok ads, and learn five expert tips for spotting manipulated media.
Scammers are using AI-generated videos of celebrities including Taylor Swift and Rihanna to promote shady services on TikTok, according to authentication company Copyleaks. The ads typically show celebrities in interview settings, such as red carpets, podcasts, or talk shows, and often manipulate real footage with AI, the company said. Many promote rewards programs claiming users can earn money by watching TikTok content and giving feedback. TikTok's official branding appears in some of the ads, though users are redirected to third-party services that ask for personal information. In one ad, a realistic AI avatar of Swift urges users to s …
Taylor Swift has been at the center of AI imitation controversies for years, and now she's become the latest celebrity to escalate efforts to protect herself from AI copycats. As usual, however, the legal system intersects with technology in complicated ways, and Swift's efforts may be a long shot. In trademark applications filed last week, Swift's team asked for protection for two phrases spoken by the singer: "Hey, it's Taylor Swift" and "Hey, it's Taylor." The trademark applications, filed by TAS Rights Management on behalf of Swift, include audio clips of Swift saying the two phrases as part of a promotion for her latest album. "Hey …
Taylor Swift has filed new trademark applications for two voice clips and one image that a trademark attorney says are “specifically designed” to protect the pop superstar from threats posed by AI.