National Economic Council Director Kevin Hassett revealed on Wednesday that the White House is exploring an executive order that would force artificial intelligence models to undergo a rigorous evaluation process akin to the Food and Drug Administration’s drug approval system. The move comes as the Trump administration confronts the security challenges posed by Mythos, a cutting-edge AI model from Anthropic that has sparked widespread alarm.
“We’re studying possibly an executive order to give a clear roadmap to everybody about how this is going to go and how future AIs that also potentially create vulnerabilities should go through a process so that, you know, they’re released in the wild after they’ve been proven safe, just like an FDA drug,” Hassett said during an appearance on Fox Business’s “Mornings with Maria.”
The proposal would represent a dramatic departure from the pro-innovation, light-touch regulatory stance the Trump administration championed after taking office in early 2025. Some AI advocates worry that such a process could stifle development, but Hassett framed it as a necessary safeguard for national security. The administration is reportedly drafting new guidance on AI-related cybersecurity and safety risks, and Hassett is among the first officials to openly discuss the potential plan.
Anthropic released Mythos last month to a limited group, including Wall Street banks, warning that its advanced capabilities were too dangerous for public release. According to the company, Mythos can identify decades-old security flaws in web browsers, software, and critical infrastructure. In the right hands, it could help patch vulnerabilities and protect against hackers, but in the wrong hands, it could empower malicious actors to exploit weaknesses faster.
The limited rollout sent shockwaves through both Wall Street and Washington, prompting Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell to convene financial leaders to address cybersecurity risks. Hassett noted that he and Bessent are meeting with major banks on Wednesday to “catch up on the progress that they’re making.”
Despite earlier tensions between the Pentagon and Anthropic, the White House has softened its tone and is now working with the company and other tech firms to understand Mythos. “The Mythos model makes it so that vulnerabilities that we didn’t know existed before could potentially be found with this more powerful tool,” Hassett said. “But we have scrambled an all-of-government effort and all of the private sector to coordinate and to make sure that before this model is released out into the wild, that it’s been tested left and right.”
Federal agencies are already testing Mythos, and Anthropic has briefed the Cybersecurity and Infrastructure Security Agency and the National Institute of Standards and Technology (NIST). NIST has tested AI models from OpenAI and Anthropic under a 2024 agreement, and on Tuesday it announced similar pacts with xAI, Microsoft, and Google DeepMind. It remains unclear whether the evaluation process Hassett described would be tied to these existing agreements.
The proposed executive order could mark a turning point for AI regulation, reshaping how advanced models are deployed in the United States. As the administration weighs its next steps, it must strike a balance between fostering innovation and ensuring public safety.
