A new policy push in Washington could make AI safety testing more than a voluntary exercise.
Advocacy group Americans for Responsible Innovation told U.S. officials on Monday that cutting-edge AI models should be screened for national security risks before they are publicly released, and that companies whose models fail those reviews should be blocked from winning lucrative U.S. government contracts.
The recommendation lands at a moment when the White House is already wrestling with the risks posed by increasingly powerful frontier systems.
The Trump administration is “grappling with the implications” of Anthropic’s Mythos, a model officials fear could make “complex cyberattacks easier and quicker to execute,” creating potential national security threats.
That concern gives the group’s proposal a sharper edge: instead of waiting until powerful models are already in circulation, it wants screening tied directly to who gets federal business.
The proposal would raise the bar for major AI developers
Americans for Responsible Innovation is not calling for blanket oversight of every AI company.
Reuters reported that the group wants the administration to build a review system focused on the biggest frontier model developers — specifically companies that spend $100 million or more a year on compute to train advanced models, or those generating at least $500 million annually from AI products and services.
California already uses a similar threshold in a safety reporting law enacted last year, giving the proposal a ready-made precedent rather than leaving it purely theoretical.
The idea would also formalize a process that is currently much looser.
The Hindu Business Line reported that the U.S. Center for AI Standards and Innovation, or CAISI, already reviews some AI models through voluntary agreements with OpenAI, Anthropic, and, more recently, Google, Microsoft, and xAI.
But the advocacy group wants something stronger: mandatory requirements led by CAISI, backed by Congress, and enforced through a permanent office inside the U.S. Department of Commerce.
Government money becomes the leverage point
What makes the proposal notable is the mechanism it uses. Instead of focusing first on bans or public deployment rules, it targets access to government contracts.
That is a powerful lever because federal spending can shape market behavior without formally shutting companies out of the broader commercial AI race. The group said companies should have to pass a safety review to remain eligible for government business, turning public procurement into a pressure tool for AI governance.
That approach could appeal to policymakers who want stronger safeguards without immediately rewriting the entire regulatory framework around advanced models.
It also reflects a larger shift in the AI debate: safety is no longer being framed only as a moral or scientific concern, but as a practical condition for doing business with the state.
Mythos keeps raising the stakes
The timing of the proposal is inseparable from the debate over Anthropic’s Mythos. Officials are increasingly concerned that some frontier systems may not just assist with benign productivity tasks, but could accelerate cyberattacks or even help with weapons development.
Americans for Responsible Innovation explicitly urged the administration to vet upcoming models for both “cyberattack” and “weapons development capabilities,” suggesting that the next phase of AI policy may be shaped less by consumer chatbots and more by dual-use national security fears.
For now, the recommendation remains just that — a recommendation. But it points to where the policy fight is headed.
If Washington adopts the idea, passing a safety review may no longer be just a reputational advantage for AI labs. It could become the price of admission for some of the most valuable government contracts in the market.