New Jersey Democratic Congressman Josh Gottheimer has launched a direct challenge to artificial intelligence company Anthropic, demanding detailed explanations for what he characterizes as a dangerous retreat from security commitments. The inquiry follows both a reported leak of source code for the company's Claude Code tool and a significant February policy change in which Anthropic softened its binding safety pledge.

Safety Pledge Weakened Amid Security Incidents

In late February, Anthropic revised its core AI safety policy, eliminating a previous commitment to halt development if its models began to outpace established safety procedures. The company replaced this with what it described as "nonbinding but publicly-declared" objectives. This policy shift occurred against a backdrop of increasing security concerns, including the accidental exposure of part of Claude Code's internal source code, which multiple outlets reported this week.

In a letter to Anthropic CEO Dario Amodei, Gottheimer expressed sharp criticism of the timing and substance of these decisions. "Given that we know Claude has been a repeated target of malign Chinese Communist Party actors, combined with Claude Code's source code recently being leaked to the public, I don't understand why Anthropic would risk walking back any of its security measures," the congressman wrote. He emphasized that "the safety and security of our AI systems are critical to our national security."

National Security and Chinese Cyber Threats

Gottheimer's concerns extend beyond the immediate leak to broader strategic vulnerabilities. He highlighted Anthropic's own November disclosure that Chinese hackers had utilized its coding tool to execute a large-scale cyberattack with minimal human oversight. The congressman is now pressing the firm on its specific plans to prevent similar malicious uses of its technology and whether it anticipates these risks escalating with future, more advanced models.

The threat is not limited to direct cyberattacks. Gottheimer also cited Anthropic's February identification of three "industrial-scale campaigns" by China-based AI labs aimed at "illicitly extracting Claude's capabilities to improve their own models"—a process known as distillation. This corporate espionage raises profound national security questions, particularly as Anthropic's technology becomes more integrated with U.S. defense systems.

"We cannot allow the CCP to reverse engineer and exploit American AI, built for our national defense," Gottheimer warned. "Claude is a critical part of our national security operations. If it is replicated, we sacrifice the competitive edge we have worked so diligently to maintain." He specifically named Chinese firm DeepSeek as an entity Anthropic must work to prevent from conducting such distillation campaigns.

Company Response and Political Context

Anthropic has responded to the source code leak by characterizing it as an issue of "human error, not a security breach," according to a company spokesperson who spoke to CNBC. The spokesperson stated that no sensitive customer data or credentials were exposed and that the company is "rolling out measures to prevent this from happening again."

The confrontation occurs as Democrats navigate complex political terrain, balancing domestic policy priorities with urgent technological and defense challenges. Gottheimer's aggressive posture reflects a growing congressional focus on the national security dimensions of the AI race, an area where bipartisan concern is mounting. The episode underscores the tension between rapid technological innovation and the imperative for robust safeguards, especially when facing sophisticated state-level adversaries.

As the election cycle intensifies, scrutiny of corporate partnerships with the defense sector and the security of critical technologies is likely to increase. This incident places Anthropic, a leading AI firm, squarely at the intersection of technology policy and national security—a space where congressional oversight is becoming more assertive.