More than 600 employees at Google have signed an internal letter pressing CEO Sundar Pichai to walk away from any classified artificial intelligence contract with the Pentagon. The signatories, drawn from Google DeepMind and Google Cloud, argue the company cannot guarantee its tools won't be used for mass surveillance or autonomous weapons, echoing the stance that led the Department of Defense to blacklist Anthropic earlier this year.
The letter, sent Monday and obtained by The Hill, states that Google currently lacks the technical controls to prevent its AI from being deployed in ethically dangerous ways. “As people working on AI, we know that these systems can centralize power and that they do make mistakes,” the employees wrote. “We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses.”
The Information first reported this month that Google is negotiating with the Pentagon to deploy its Gemini AI models in classified environments. Under the proposed agreement, the Pentagon could use Google's AI for all lawful purposes, though negotiators have discussed language barring its use in mass surveillance or in autonomous weapons lacking human oversight. The employees argued such provisions are unenforceable in practice.
“The only way to guarantee that Google does not become associated with such harms is to reject any classified workloads,” the letter says. The employees warned that accepting the deal could cause “irreparable harm to Google’s reputation, business and role in the world.”
Google already has a contract with the Pentagon for nonclassified AI work through its genAI.mil platform. But the signatories contend that moving into classified settings crosses a red line. “A lot of it comes down to what technical safeguards companies can put in place; but the DoD specifically prohibits any controls,” one organizer said in a press release. “If leadership is truly serious about preventing downstream harms, they must reject classified workloads entirely for now.”
The controversy comes months after the Pentagon labeled Anthropic a supply chain risk for refusing to allow its models to be used for all lawful purposes. Anthropic has since sued the Trump administration over the designation, which is typically reserved for foreign adversaries. Hundreds of Google and OpenAI employees signed a letter supporting Anthropic at the time.
OpenAI, maker of ChatGPT, struck a deal with the Pentagon just hours after Anthropic's designation, sparking backlash. CEO Sam Altman later admitted the move "looked opportunistic and sloppy" and said the company asked for contract additions regarding domestic surveillance. The Pentagon's push for unrestricted AI use has raised broader questions about oversight, especially as the Defense Department also faces scrutiny over its optional vaccine policy and its seizure of Stars and Stripes, the publication troops rely on for independent news.
The Hill has reached out to Google and the Pentagon for comment. The internal revolt underscores growing unease among tech workers about military applications of AI, a tension that has flared repeatedly as Silicon Valley deepens its ties with the defense establishment.
