Political Strategist Sides with AI Firm in Defense Department Dispute

Former White House strategist Steve Bannon has publicly aligned himself with artificial intelligence company Anthropic in its high-stakes confrontation with the Pentagon over the military use of advanced AI. Bannon stated the company was correct to demand contractual prohibitions against deploying its technology in fully autonomous lethal weapons systems.

Pentagon Labels AI Developer a National Security Risk

The Defense Department, after negotiations with Anthropic over safety restrictions collapsed, has insisted on rights to use the company's Claude AI for "all lawful purposes." Last month, it escalated the conflict by formally designating Anthropic as a supply chain risk—a classification typically applied to foreign adversaries. The designation followed the company's refusal to allow unrestricted use of its AI for domestic mass surveillance and autonomous weaponry; Anthropic has since filed suit against the Pentagon over the blacklisting.

"I think Anthropic had it right," Bannon said Thursday during an appearance at the Semafor World Economy conference. He expressed respect for Defense Secretary Pete Hegseth but argued the potential dangers of unrestricted military AI necessitate oversight. "It's very, very complicated... but I think in this situation, right, it's almost too dangerous," Bannon continued. "That's why you need a sort of atomic energy commission, you need some sort of modicum of [regulation]."

Executive Order and Operational Fallout

Following the Pentagon's risk designation, President Trump issued an order directing all federal agencies to immediately cease using any Anthropic products. This has created significant operational challenges, forcing agencies and their contractors to scramble to excise a major AI vendor from complex federal supply chains. Anthropic's technology has been integrated into Defense Department and intelligence community systems since late 2024 through a partnership with data analytics firm Palantir.

The core of the legal and policy fight hinges on Anthropic's founding principles of transparency and enforced safety guardrails. The company maintains that current AI systems lack the reliability required for life-or-death military decisions and that unfettered use would dangerously expand government surveillance capabilities. The Pentagon has rejected this argument, asserting its prerogative to utilize commercially available technology for national defense.

This dispute occurs against a backdrop of broader Pentagon efforts to secure substantial funding for autonomous warfare initiatives and reflects ongoing tensions within the Defense Department's leadership over technology and strategy.

Signs of a Potential Thaw

Despite the severe breakdown, there are indications the standoff may be easing. According to an Axios report, Anthropic CEO Dario Amodei is scheduled to meet with White House chief of staff Susie Wiles on Friday. This high-level engagement suggests both sides may be seeking a path to de-escalation, though the fundamental disagreement over ethical boundaries for military AI remains unresolved.

The controversy also intersects with wider congressional debates over surveillance authority, as lawmakers return to Washington facing renewed fights over FISA reauthorization. Bannon's intervention adds a notable political dimension to the technical and ethical debate, highlighting a rare point of agreement between a prominent Trump ally and an AI company built around safety commitments. The outcome will set a critical precedent for how the U.S. government procures and deploys cutting-edge artificial intelligence, with significant implications for both national security and the burgeoning AI industry.