The Center for AI Standards and Innovation (CAISI), a division of the US Department of Commerce, has signed agreements with Google DeepMind, Microsoft, and xAI that would give the agency the ability to vet AI models from these organizations and others prior to their being made publicly available.
According to a release from CAISI, which is part of the department’s National Institute of Standards and Technology (NIST), it will “conduct pre-deployment evaluations and targeted research to better assess frontier AI capabilities and advance the state of AI security.”
The three join Anthropic and OpenAI, which signed similar agreements almost two years ago during the Biden administration, when CAISI was known as the US Artificial Intelligence Safety Institute.
An August 2024 release about those agreements indicated that the institute planned to provide feedback to both companies on “potential safety improvements to their models, in close collaboration with its partners at the UK AI Safety Institute.”
Microsoft, Google DeepMind and Elon Musk’s xAI have offered to let the US government access new AI models ahead of their general release, opening a new phase in Silicon Valley’s often fractious relationship with federal officials over AI threats. Under the latest agreements, companies are offering models to US officials in the name of security review, in the hope that government analysts can vet frontier AI systems for security risks such as cyberattacks and military misuse before they are exposed to developers, users and, inevitably, those who should have no business […]
Agreements with Microsoft, Google DeepMind and xAI focus largely on identifying cybersecurity, biosecurity and chemical weapons risks
The US government has struck deals with Google DeepMind, Microsoft and xAI to review early versions of their new AI models before they are released to the public.
The Center for AI Standards and Innovation (CAISI), part of the US Department of Commerce, announced the agreements on Tuesday, saying the review process would be key to understanding the capabilities of new and powerful AI models as well as to protecting US national security. These collaborations will help the federal government “scale (its) work in the public interest at a critical moment”, the agency said in a press release.
Continue reading...
A slew of tech earnings predict an expensive future for everyday electronics buyers, and big developments in the UK tech world
Hello, and welcome to TechScape. I’m your host, Blake Montgomery, US tech editor at the Guardian. Today, we examine how a slew of tech earnings predict an expensive future for everyday electronics buyers and two big developments in the UK tech world: Workers at Google DeepMind, headquartered in London, petitioned to unionize to stop their employer’s military work. And UK police are increasingly adopting live facial recognition, with considerable consequences.
Google DeepMind, Microsoft, and Elon Musk's xAI have agreed to allow the US government to review new AI models before they're released to the public. In an announcement on Tuesday, the Commerce Department's Center for AI Standards and Innovation (CAISI) says it will work with the AI companies to perform "pre-deployment evaluations and targeted research to better assess frontier AI capabilities."
CAISI, which started evaluating models from OpenAI and Anthropic in 2024, says it has performed 40 reviews so far. Both companies "have renegotiated their existing partnerships with the center to better align with priorities in President Donald Trump …
Read the full story at The Verge.
About a week into the Musk v. Altman trial, we've heard from some of the most powerful people in tech, including OpenAI president Greg Brockman, Elon Musk's fixer Jared Birchall, and Musk himself. But one of the most prominent characters is hovering around the margins: Demis Hassabis, CEO of Google DeepMind.
Hassabis is the architect of Google's in-house AI lab. He founded DeepMind as an independent startup in 2010 and sold it to Google four years later, reportedly for between $400 million and $650 million. Since then, he's been at the helm of many of Google's largest AI research breakthroughs, such as AlphaFold, and he's climbed the ladder from there, …
Staffers at Google DeepMind's headquarters have voted to unionize in an effort to prevent the AI firm's technology from being used by Israel and the US military. In a letter to Google management on Tuesday, employees requested that the Communication Workers Union (CWU) and Unite the Union be recognized as joint representatives, with 98 percent of CWU members at DeepMind voting in support of the move.
"We don't want our AI models complicit in violations of international law, but they already are aiding Israel's genocide of Palestinians," an unnamed DeepMind employee said in a statement shared by the CWU. "Even if our work is only used for ad …
Exclusive: Worker pointed to Iran war and Pentagon’s Anthropic feud as indications the department is ‘not a responsible partner’
Workers developing Google’s artificial intelligence products in the UK have voted to unionize, in part out of concerns about a deal between the company and the US military that was announced last week.
In a letter slated to go to management on Tuesday and shared exclusively with the Guardian, workers at Google DeepMind, the company’s AI research laboratory, requested recognition of the Communication Workers Union and Unite the Union as joint representatives of the lab’s UK-based staff.