Anthropic, the company behind the artificial intelligence assistant Claude and a rival to ChatGPT's maker, has resumed talks with the United States government about military use of its tools, the Financial Times reported this Thursday (5).

The deal came back under discussion after an impasse last week over how Anthropic's models could be used by the US Armed Forces. The company does not want them used for mass surveillance of citizens or in autonomous weapons systems, for example, while the American government wants to be able to use them for any "lawful" purpose.

With no agreement in place, US President Donald Trump ordered on Friday (27) that the country's federal agencies stop using Anthropic's AI programs. Trump's Secretary of War, Pete Hegseth, threatened to classify the company as a supply chain risk, which would force defense contractors to cut ties with it.

Even after that order, the US used Claude in its military offensive against Iran, according to The Wall Street Journal. The assistant typically helps the US Army produce intelligence assessments, identify targets, and simulate battle scenarios.

[Image caption: Dario Amodei, chief executive of Anthropic, and Donald Trump, US president. Reuters/Bhawika Chhabra; Reuters/Nathan Howard]

Now, with an agreement possible again, the American military could resume unrestricted use of Anthropic's AI models, and the company would face less danger of being classified as a supply chain risk. Rival OpenAI, owner of ChatGPT, could also see its plans affected, after announcing a deal last week that allowed the Pentagon to use its AI models.

Valued at US$380 billion, Anthropic was the first company to sign a contract with the US Department of Defense for military use of its AI models. The US$200 million agreement was signed in July 2025; similar contracts were later signed with other companies, including OpenAI and Google.
