Anthropic has filed a lawsuit against the United States Department of Defense, challenging the agency’s decision to label the firm a ‘supply-chain risk’, a designation that could limit its access to federal technology contracts. The complaint, shared by The Washington Post, was filed on March 9 in the United States District Court for the Northern District of California under Case No. 3:26-cv-01996. Anthropic claims the designation was imposed after the company refused to remove safety restrictions on military use of its AI systems, and argues that the move could damage its ability to work with government agencies and defense contractors.
According to the complaint, the Pentagon’s decision was taken without sufficient procedural transparency and was effectively intended to punish the company. Anthropic says the designation followed a breakdown in discussions with defense officials over the acceptable scope of military use for its AI models. The firm maintains that it has built strict safety guardrails into its technology to prevent applications it considers ethically problematic, including fully autonomous lethal weapons and large-scale surveillance systems targeting civilians. When defense officials asked it to remove or weaken those safeguards, the company declined, setting up the conflict with the department.
The Dario Amodei-led firm contends that the government’s action could have far-reaching consequences beyond direct defense contracts. Many technology firms working with the federal government depend on a network of subcontractors and digital-service providers. If Anthropic is formally treated as a supply-chain risk, contractors may avoid its AI tools to prevent regulatory complications, potentially shutting the company out of large segments of the government technology ecosystem.
The lawsuit suggests that the dispute may ultimately turn on who controls the terms under which AI technologies are deployed in military contexts. Anthropic maintains that private developers should retain the ability to set ethical boundaries on the use of their systems. Government officials, on the other hand, often emphasize the need for flexibility in defense applications, arguing that vendor-imposed restrictions could limit operational effectiveness and prevent rapid responses during crises.
The stakes are heightened by the fact that, after Anthropic refused to relax its safety restrictions and negotiations with the Department of Defense collapsed, its biggest rival, OpenAI, announced a deal with the department to deploy its AI models on classified government systems. The Sam Altman-led firm soon faced widespread criticism over the agreement, and Caitlin Kalinowski, who heads OpenAI’s robotics division and its hardware development initiatives, announced her resignation, citing the rushed nature of the deal.
The Tech Portal is published by Blue Box Media Private Limited. Our investors have no influence over our reporting. Read our full Ownership and Funding Disclosure →