Over the years, enterprise IT execs have gotten frighteningly comfortable having little control over, or visibility into, mission-critical apps, from SaaS to cloud and even cybersecurity. But generative AI (genAI) and agentic systems are taking that problem to a new extreme, with vendors able to dumb down a system IT is paying billions for without so much as a postcard.
It’s not necessarily that AI changes are made to boost profits or revenue. Even if we accept the vendor argument that such changes are in the customer’s interest, companies still need their systems to do on Thursday what they did on Tuesday, let alone what they did when the purchase order was signed.
Alas, that is no longer the case.
Consider a recent report from Anthropic that detailed a lengthy list of changes the company made to some of its AI offerings — including one that explicitly dumbed down answers — without asking or telling customers beforehand.
The report describes various changes the Anthropic team made on t
Teradata has launched its Autonomous Knowledge Platform, a new flagship offering that brings together data, analytics, AI development, agent orchestration, and governance across cloud, on-premises, and hybrid environments.
The target customer is an enterprise that has moved beyond testing AI assistants and is now asking harder questions: which data agents can use, what actions they can take, how much they will cost to run, and who is accountable when something goes wrong.
The company said the platform builds on its existing database engine and governance infrastructure, while adding new capabilities and more tightly integrating existing ones, including AI Studio, the Tera natural-language workspace, Tera Agents, Elastic Compute on Teradata Cloud, and the upcoming Teradata Factory for on-premises AI workloads.
With this launch, Teradata enters a competitive market. Snowflake, Databricks, Microsoft, Oracle, and Salesforce are all trying to persuade customers that their platforms should beco
Agreements with Microsoft, Google DeepMind and xAI focus largely on recognizing cybersecurity, biosecurity and chemical weapons risks
The US government has struck deals with Google DeepMind, Microsoft and xAI to review early versions of their new AI models before they are released to the public.
The Center for AI Standards and Innovation (CAISI), part of the US Department of Commerce, announced the agreements on Tuesday, saying the review process would be key to understanding the capabilities of new and powerful AI models as well as to protecting US national security. These collaborations will help the federal government “scale (its) work in the public interest at a critical moment”, the agency said in a press release.
Insider Brief PRESS RELEASE — Applied Digital Corporation (NASDAQ: APLD), a designer, builder, and operator of high-performance, sustainably engineered data centers and colocation services for artificial intelligence, cloud, networking, and blockchain workloads, has announced the closing of a $300 million senior secured bridge facility led by Goldman Sachs. The facility is intended to fund the continued development […]
Microsoft and Google are adding new controls for AI agents, as enterprise IT teams try to keep up with tools that can access corporate data and act across business applications.
Microsoft’s Agent 365, made generally available for commercial customers on May 1, is designed to help organizations discover, govern, and secure AI agents, including those operating across Microsoft, third-party SaaS, cloud, and local environments.
Google’s new AI control center for Workspace, announced this week, focuses more specifically on giving administrators a centralized view of AI usage, security settings, data protection controls, and privacy safeguards within Workspace.
The timing reflects a shift in enterprise AI use. Many companies are no longer just testing chatbots, but are beginning to use agents that can reach corporate systems and carry out tasks on behalf of users.
Analysts said the shift changes how CIOs and CISOs should think about AI agents inside the enterprise.
“By placing agent controls
Cybersecurity was already under strain before AI entered the stack. Now, as AI expands the attack surface and adds new complexity, the limits of legacy approaches are becoming harder to ignore. This session from MIT Technology Review’s EmTech AI conference explores why security must be rethought with AI at its core, not layered on after…
While fears that artificial intelligence will take all human jobs are likely overblown, experts agree that to stay relevant, cyber and IT professionals need to incorporate AI into their toolboxes.
Let’s be honest about what’s happening in the market: Public cloud has become the easy button for AI. It offers immediate access to compute, storage, managed services, foundation model ecosystems, automation tools, and global reach. For enterprises that want to launch quickly, it is hard to argue against it. You do not need to spend years standing up infrastructure, hiring specialized operations teams, or engineering your own scalable environment before you can test your first use case.
This is exactly why adoption continues even as confidence in cloud resilience becomes more complicated. This article about the expanding cloud market makes the point clearly: enterprises are not pulling back from hyperscale clouds despite numerous outages. They continue to move forward because the benefits of agility, scalability, and rapid deployment are too valuable to ignore. The cloud remains deeply embedded in business operations, and for many organizations, stepping away would undo years, often de