Anthropic and the U.S. Department of Defense reportedly at odds over safety safeguards for AI applications

AASTOCKS
2026.01.30 01:57

Reuters reported that negotiations between the U.S. Department of Defense and artificial intelligence startup Anthropic have stalled over disagreements about safety measures.

The report indicates that the startup's stance on how its AI tools may be used has deepened the rift with the Trump administration. Anthropic representatives have expressed concern that their tools could be used to surveil U.S. citizens or assist in weapons targeting without adequate oversight. The Department of Defense, however, is dissatisfied with these usage restrictions. According to a departmental AI strategy memorandum released on the 9th of this month, officials argue that commercial AI technology, provided it complies with U.S. law, should be deployable without being constrained by corporate usage policies. Even so, the department will still need Anthropic's cooperation: the company's models are trained to avoid actions that could cause harm, and Anthropic employees would have to adjust the AI to meet the department's needs.

The report states that a Department of Defense spokesperson did not respond to requests for comment. Anthropic said its AI is widely used by the U.S. government for national security tasks, and that the company is holding productive discussions with the Department of Defense on how to continue this work.

Anthropic is one of several AI developers that secured Department of Defense contracts last year, with its contract valued at up to US$200 million. The other companies selected include Alphabet (GOOGL.US) subsidiary Google, xAI, and OpenAI.