
Claude AI Footprint in Caracas Bombing Operation

The Wall Street Journal has revealed that the Claude AI model was used in a classified US military operation to kidnap Nicolas Maduro. This is the first operational use of a large language model in a US military mission.
News ID: 87404
Publish Date: 15 February 2026 - 10:13

TEHRAN (Defapress) - The Wall Street Journal reported on Saturday that the Claude AI model developed by Anthropic was used in the US military’s operation to kidnap Venezuelan President Nicolas Maduro. The incident is a high-profile example of how the US Department of Defense is using AI technology in its operations.


The operation, which involved a massive bombing of the Venezuelan capital, Caracas, killed 83 people, according to the Venezuelan Defense Ministry. The terms of use of Anthropic’s Claude explicitly prohibit its use for violent purposes, weapons development, or surveillance.

Anthropic is the first AI developer to have its tools used in a classified operation by the US Department of Defense. However, the exact details of how Claude, which has capabilities ranging from processing PDF files to guiding autonomous drones, was employed remain unclear.

A spokesperson for Anthropic declined to comment directly on the use of Claude in the operation, but stressed that any use of the tool must be consistent with the company’s usage policies. The U.S. Department of Defense has also remained silent on the allegations.

The Wall Street Journal reported, citing unnamed sources, that Claude was made available to the military through Anthropic’s partnership with Palantir Technologies, a contractor for the Department of Defense and federal law enforcement agencies. Palantir also declined to comment on the matter.

The incident comes as militaries around the world, including the United States, are increasingly relying on artificial intelligence in their arsenals. For example, the Israeli army has used drones equipped with autonomous capabilities in Gaza and has used artificial intelligence to populate its target bank. The U.S. military has also used artificial intelligence to target strikes in Iraq and Syria in recent years.

Critics have warned against the use of AI in weapons technology and autonomous weapons systems, pointing to targeting errors caused by computer decision-making that could lead to the deaths of innocent people. AI companies, meanwhile, face the challenge of deciding how their technologies should interact with the defense sector.

Dario Amodei, CEO of Anthropic, has stressed the need for rules and regulations to prevent harm from AI applications and has expressed concern about their use in lethal autonomous operations and in surveillance within the United States.

This cautious approach has apparently upset the US Department of War. US Secretary of War Pete Hegseth announced in January that the department would not use AI models whose policies do not permit combat applications. The Pentagon announced a partnership with Elon Musk’s company xAI that same month. In addition, the Department of War uses custom versions of Google’s Gemini AI and OpenAI systems to support research.

As tensions between AI innovation and military applications grow, so does the need for stronger ethical and legal frameworks governing the use of such technologies, and disclosures of this kind are likely to intensify global debate over the role of AI in armed conflict.
