Understand the dispute that could delay military use of AI

by Marcelo Moreira

A dispute between the Pentagon and a US technology company could limit the use of artificial intelligence (AI) by US military forces, just as these tools have become central to war operations.

The conflict involves Anthropic, the company responsible for the Claude AI model, used by US defense agencies in tasks such as intelligence data analysis, operations planning, simulations and cybersecurity.

Last year, the War Department signed contracts with Anthropic and other companies in the artificial intelligence sector to accelerate the integration of the technology into the American military, amid the technological race with China and Russia. The agreements could reach around US$200 million per supplier.

The impasse

In January, however, the US Department of War began demanding changes to contracts that were already in force, to allow models used by the armed forces to be used for any purpose considered legal. The requirement came at a time when the Pentagon was expanding the use of AI in intelligence operations and military planning.

Information published at the beginning of February revealed that the Pentagon had already used the Claude model in the analysis of intelligence data during the operation carried out in January in Venezuela, which resulted in the capture of dictator Nicolás Maduro. The revelation increased tension in ongoing negotiations between the War Department and Anthropic, as company executives did not welcome the news that the system had been used to conduct a real military operation, something that could contravene the company’s internal rules of use.

After the revelation, Anthropic began to demand, in negotiations, guarantees that its technology would not be used in mass surveillance within the United States or in the development of fully autonomous weapons, limits set out in the company’s own policy.

The Pentagon did not like Anthropic’s reaction and pressed even harder for changes to the current contract, arguing that the armed forces must have the freedom to use tools considered essential to national security.

Negotiations reached breaking point at the end of February, when Anthropic refused the contractual change that would allow unrestricted use of the model, deciding not to give in to pressure from the War Department, which had given the company an ultimatum. Without an agreement, President Donald Trump’s government then ordered the suspension of the use of the company’s technology in federal agencies and authorized the Pentagon to begin replacing the model in military programs.

Severe punishment

But Anthropic wasn’t just replaced. It was also classified by the Trump administration as a risk to the national security supply chain, a decision that prevents other companies that provide services to the defense of the United States from maintaining any commercial relationship with Anthropic without government authorization.

The measure was considered unusual, as this type of classification is normally applied against foreign suppliers considered a threat to American security, such as those from China.

Furthermore, the government determined that the entire federal structure would stop using the company’s products. Agencies were instructed to remove the Claude model from systems linked to public contracts and begin migrating to technologies from other suppliers. Trump set a six-month deadline for the system to be completely replaced in military and intelligence programs.

This week, Anthropic went to court against the United States government to try to suspend the decision. The company claims that the classification as a risk to the supply chain was abusive, has no legal basis and could cause billions in losses, in addition to compromising contracts already in progress with federal agencies and companies in the defense sector.

Replacement and risk for military use of AI

After the split, the Pentagon began negotiations with Anthropic’s competitors to take over contracts with the department. OpenAI, owner of ChatGPT, last week announced an agreement with the Department of War to provide artificial intelligence models for networks used by US defense. Other companies in the sector, such as Elon Musk’s xAI, also began to be considered for new military projects.

The replacement of artificial intelligence models in the US defense system, however, is expected to be slow. Even amid the dispute with Anthropic, the US press reported that the Pentagon was using Claude in the operation against Iran.

Experts say the confrontation between the Pentagon and Anthropic has exposed a structural problem in the United States’ strategy for the military use of artificial intelligence. They warn that the episode occurs precisely at a time when Washington is trying to accelerate the use of the tool to maintain an advantage over China and Russia.

For technology and innovation specialist Fernando Barra, author of the book “Augmented artificial intelligence”, the conflict between the company and the US government should not halt the military use of artificial intelligence in the American armed forces, but it could limit and delay, in the short term, the pace of adoption of the tool precisely at the moment when technological speed has become a strategic advantage.

Speaking to People’s Gazette, Barra stated that artificial intelligence is already integrated into core areas of modern military operations, which increases the impact of disputes with private suppliers, such as the one involving Anthropic and the War Department.

“Today, AI is already used in intelligence, threat detection, planning, modeling, simulation, classified data analysis, cyber operations and decision support. In the conflict in the Middle East, American officials said that AI has been used to drastically shorten processing time and support the identification of threats, still with a final human decision,” he said. The expert assesses that the dispute between the Pentagon and Anthropic shows that artificial intelligence has started to directly influence the military power structure.

“AI has already left the support area and entered the heart of the military power architecture,” he stated.

Barra explains that the American government today depends on the pace of innovation, infrastructure and experts from private industry to maintain technological superiority, which makes disputes with private suppliers a potentially strategic problem. According to him, if several companies start to impose restrictions, encouraged by Anthropic’s decision, the Pentagon may face fragmentation of suppliers, loss of interoperability between systems and dependence on a smaller number of partners, a scenario that increases operational risks in times of tension.

Barra states that the episode could lead the American government to try to increase control over the development of artificial intelligence to reduce dependence on private companies and guarantee direct access to the technology. For him, the Trump administration’s decision to classify Anthropic as a risk to the supply chain shows that the White House is willing to use tougher instruments to prevent suppliers from limiting the use of systems now considered strategic for US security.

“If the government understands that a technology has become strategic infrastructure, it is willing to use heavy instruments to guarantee access and discipline suppliers,” said the analyst.
