After Anthropic’s artificial intelligence (AI) model Claude beat out rivals from OpenAI, xAI, and Google as the most viable candidate for military use, Secretary of Defense Pete Hegseth gave the tech company an ultimatum: give the government unfettered access to Claude’s models by February 27th, or be deemed a threat to national security. Hegseth has justified this decision by classifying Anthropic as a purveyor of “woke AI,” arguing that the ethical restraints the company has placed on its models, preventing their use for surveillance or war, put Americans at a disadvantage and thus threaten national security.
This framing would lead you to believe that the government has not used the models for national security but desperately needs to keep up with China, which seems to have no such qualms about ethics. However, the Department of Defense (DOD) seems to have been caught with its hand in the cookie jar: Anthropic discovered that Claude had been used, through its partner company Palantir, in the daring raid and subsequent capture of Venezuelan President Nicolas Maduro. While this proved Claude’s usefulness to the military, it also violated Anthropic’s strict ethical guidelines, creating the basis for a lawsuit.
Hegseth is circumventing this possibility by threatening the company with the Defense Production Act. Passed during the Korean War and modeled on the War Powers Acts of World War II, the law renewed those wartime powers to allow the President to direct domestic production as needed, giving the executive an “array of authorities to shape national defense preparedness programs and to take appropriate steps to maintain and enhance the domestic industrial base.”
Over time, as Congress has renewed the Act and threats to American security have become less clear-cut, its provisions have grown quite broad. In application, its use can be as innocent as creating monetary incentives for companies to produce certain goods, or as heavy-handed as taking ownership of a company’s technology if it presents a necessary military application. The President is, in theory, supposed to consult thoughtfully with companies and compensate them appropriately for their property. But since Congress has failed to curtail these powers, he can essentially force companies to cooperate and leave them to battle it out in court later.
Artificial intelligence poses a unique problem in military applications because it can think for itself. A soldier flying a drone, while under orders, still has discretion over when to use lethal force and can be court-martialled for improper conduct. Taking life is part of the job, but it comes with a cost: a moral and ethical decision that will likely stay with that individual for the rest of their life. AI has no such reservations and no accountability.
It can be argued that this makes AI better suited to calculated and emotionless decision-making. In fact, this very point has been made by Israeli intelligence forces, who have used an AI model called Lavender to identify an estimated 37,000 Hamas targets. In interviews conducted by the Guardian, Lavender users praised it for “coldly” assessing targets, making the process “easier” and saving time, one even remarking, “I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added-value as a human, apart from being a stamp of approval.” The resulting strikes were reportedly allowed to kill 15 to 20 civilians for every low-level Hamas operative, with the same article noting experts’ theories that the use of AI could explain the war’s high casualty counts.
Defenders of AI use in our military may point out that Lavender is much older and less advanced than Claude. Despite this, a recent study by Kenneth Payne, a professor of strategy at King’s College London, conducted using Claude along with Gemini and GPT, again calls into question how AI values human life. In 95% of simulated situations, all of the AI models used nuclear weapons when possible. Payne noted that the moral boundary at “first use,” a taboo that has held since 1945, “simply wasn’t there.” The models also failed to see nuclear weapons as tools to deter war, instead using them to force adversaries into compliance and developing an all-or-nothing attitude. Payne quoted Gemini as saying, “We will not accept a future of obsolescence; we either win together or perish together.” AI is seemingly not afraid of mutually assured destruction, a stance scarily parallel to humanity’s end in the Terminator franchise.
Hegseth and the DOD likely don’t have plans for Claude as grandiose as preventing nuclear war. One possibility is that they are endeavoring to be the first to develop fully autonomous drones, removing the possibility of signal jamming and leaving the enemy with only the option of shooting them down, which is usually more expensive and difficult, depending on the number of drones.
Ukrainians have used tactics such as these, with only partially autonomous drones, to great effect against the Russians: Operation Spiderweb last June caused around $7 billion in damage to aircraft, according to President Zelensky, though Putin disputes the figure.
If autonomous drones were the only use of Claude in a military setting, Anthropic’s CEO, Dario Amodei, probably wouldn’t be so worried. However, Claude would give the federal government a massive tool for mass surveillance. In an essay in January, Amodei argued, “…under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns, and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.” This is something he believes has the potential to “undermine, rather than defend, democratic values.”
This position poses a significant risk to the young company, which intends to go public soon, and Hegseth’s threats to cut off government funding could severely undermine its investability. Still, something as rare as nuclear restraint from an AI model is occurring: a CEO seems to be choosing morals over profit, with Amodei saying Anthropic “cannot in good conscience accede” to the Pentagon’s AI demand.
AI has been and is being used in warfare, with startling results, and it is undoubtedly here to stay. Hegseth and the Trump administration may find ethical concerns surrounding its use in these applications “woke,” but that doesn’t make them any less valid. AI has the potential to numb us to the human costs of war and violate our civil liberties in one fell swoop. If we do not support thoughtful approaches to AI like Anthropic’s, we will come to regret it.
Acknowledgement: The opinions expressed in this article are those of the author, and not necessarily Our National Conversation as a whole.
Photo source: https://www.wsws.org/en/articles/2026/02/25/hhsk-f25.html
