Anthropic, the artificial intelligence company, has decided against publicly releasing its latest advanced AI model, warning that its capabilities currently pose too great a risk to national security and public safety. The firm announced it will instead provide access to the model, named Claude Mythos Preview, exclusively to a curated group of major technology corporations and organizations responsible for critical software infrastructure.
A Defensive Consortium for a Dangerous Tool
The restricted access program is part of a new initiative called Project Glasswing. Partners include Microsoft, Apple, CrowdStrike, and Amazon Web Services, alongside more than forty entities that build and maintain essential software systems. Anthropic stated the consortium was formed directly in response to discovering the model's unprecedented ability to find security flaws, which the company says "could reshape cybersecurity."
According to Anthropic, Mythos Preview has already identified thousands of previously unknown, high-severity security vulnerabilities embedded in every major operating system and web browser. Some of these flaws have reportedly existed undetected for over twenty years. The company argues that while such powerful AI tools present a massive defensive opportunity, they also dramatically lower the barrier for malicious actors and foreign adversaries to discover and exploit these same weaknesses.
An Urgent Race Against Proliferation
"Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely," Anthropic wrote in its announcement. "The fallout – for economics, public safety, and national security – could be severe." The company framed Project Glasswing as "an urgent attempt to put these capabilities to work for defensive purposes" before they become widely available.
The initiative aims to give software defenders a significant advantage. Partner companies will use Mythos Preview to scan their own systems and open-source code, with findings shared across the industry. Anthropic believes these capabilities, while dangerous, can fundamentally improve software security by enabling the rapid discovery and patching of flaws and aiding in the creation of more secure code from the outset.
"Project Glasswing is an important step toward giving defenders a durable advantage in the coming AI-driven era of cybersecurity," the company stated. The move underscores the growing national security implications of advanced AI, which increasingly intersect with broader defense and policy debates.
To support the effort, Anthropic will commit up to $100 million in usage credits for partners and provide $4 million in direct donations to open-source security organizations. The decision to withhold the model reflects a cautious, risk-averse approach by a leading AI developer at a time when regulatory frameworks are still evolving. It also underscores how technological advancement is forcing rapid reassessments of security postures across both the private and public sectors.
The development comes amid broader geopolitical tensions in which cyber capabilities are a central concern. Anthropic's restrictive release strategy suggests a growing consensus within parts of the tech industry that some AI advancements may require controlled deployment environments, akin to other dual-use technologies with significant defensive and offensive potential.
