
A Pentagon Push Into A.I. Meets Resistance in Court, in the Field and Inside Its Own Ranks

The Pentagon’s drive to weave artificial intelligence into nearly every layer of military work is accelerating, even as it collides with lawsuits, ethical alarms and the practical realities of using chatbots in war.

In a span of days, three developments captured the promise and peril of that effort. A federal appeals court in Washington allowed the Defense Department to keep treating Anthropic, the maker of Claude, as a supply-chain risk while a legal fight plays out over whether the military can demand unrestricted use of a commercial A.I. model. At the same time, the U.S. Army was reported to be building its own battlefield-oriented chatbot, trained on military data to answer soldiers’ operational questions. And scrutiny intensified around Emil Michael, the senior Pentagon official overseeing A.I. initiatives, after disclosures showed he had sold xAI holdings for a gain of up to $24 million in the wake of the department’s announcement of agreements with Elon Musk’s company.

Taken together, the episodes illustrate a broader shift now underway in Washington: the Defense Department is no longer merely experimenting with generative A.I. tools. It is trying to make them routine infrastructure — for planning, logistics, analysis and potentially combat operations — before the legal and ethical rules governing their use are settled.

The Anthropic Fight Tests the Limits of Military Access

The clash with Anthropic has become a closely watched test of how much control A.I. companies can retain once the military comes calling.

At the center of the dispute is Anthropic’s effort to preserve limits on how Claude can be used, particularly for fully autonomous weapons and surveillance. The Pentagon, by contrast, has pressed for permission to use frontier A.I. systems for any lawful military purpose, a position that reflects its growing insistence that commercial technology providers not dictate battlefield constraints.

On April 9, a federal appeals court in Washington declined to stop the Pentagon from continuing to treat Anthropic as a supply-chain risk while the case proceeds. That ruling cut against a temporary order issued by a federal judge in San Francisco on March 26, which had barred the designation; the administration later withdrew the label in that San Francisco proceeding. The conflicting rulings have left uncertainty over how, and under what terms, Anthropic may do business with the military in the months ahead.

The stakes extend well beyond one company. If the government ultimately prevails, it could strengthen the Pentagon’s hand in insisting that major A.I. developers accept broad defense uses of their models. If Anthropic succeeds, it may reinforce the idea that private firms can impose red lines on military applications, even when their tools are becoming central to national security work.

Another hearing in the Washington case is scheduled for May 19, leaving unresolved one of the most consequential questions in defense technology: whether the government or the model maker gets the final say over the uses of powerful general-purpose A.I.

The Army’s Answer: Build a Combat Chatbot of Its Own

If the Anthropic case shows the friction involved in adapting commercial A.I. to military needs, the Army’s chatbot project points to a parallel strategy: build systems tailored from the start for war.

According to reports, the Army is developing an internal chatbot known as Victor, or VictorBot, designed to draw on real military data to answer soldiers’ mission-critical questions. Unlike consumer chatbots repurposed for official use, the Army’s system is said to be grounded in military-generated information and structured to provide source-cited answers, an effort to make it more reliable in operational settings, where a fabricated answer could carry serious consequences.
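The phrase “source-cited answers” points at a now-common design pattern, often called retrieval-augmented generation: before answering, the system looks up relevant documents and attaches their identifiers to its response so a human can verify the claim. The sketch below is a minimal, hypothetical illustration of that pattern in plain Python; the corpus, document IDs and scoring are invented for this example and do not describe the Army’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str  # hypothetical source identifier, e.g. a manual section
    text: str

# Toy corpus standing in for a real document store (entirely invented).
CORPUS = [
    Document("FM-EX/4.2", "Route planning requires current terrain and weather data."),
    Document("LOG-EX/1.1", "Resupply requests must include unit, location and priority."),
]

def retrieve(query: str, corpus: list, k: int = 1) -> list:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_words & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_sources(query: str) -> str:
    """Echo the best-matching passage with its citation attached.

    A real system would pass the retrieved text to a language model to
    compose the answer; the citation step is what makes it auditable.
    """
    hits = retrieve(query, CORPUS)
    if not hits:
        return "No supporting source found; escalate to human review."
    best = hits[0]
    return f"{best.text} [source: {best.doc_id}]"

print(answer_with_sources("What must a resupply request include?"))
# -> Resupply requests must include unit, location and priority. [source: LOG-EX/1.1]
```

The citation is the point: a user can trace a claim back to an underlying document rather than trusting free-floating model output, which is what the reported emphasis on grounding and sourcing appears intended to achieve.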

That approach aligns with a broader Pentagon campaign to expand A.I. across classified and unclassified networks. In January, Defense Secretary Pete Hegseth said Grok, xAI’s chatbot, would be added to Pentagon systems alongside Google tools, part of a wider push to deploy leading commercial models across the department.

But the Army’s effort is notable for another reason: it suggests the military is not content to rely solely on commercial off-the-shelf chatbots, particularly for battlefield use. Training a model on military data could make it more useful for mission planning, intelligence retrieval and procedural questions. It could also reduce dependence on outside companies whose usage policies, export restrictions or model updates may not align with military needs.

Still, the hardest questions remain unanswered. It is not yet clear when Victor will be fielded, how broadly it will be used, or how it will perform under the ambiguity and stress of combat conditions. Nor is it clear what safeguards will govern when soldiers can rely on it and when human review will be mandatory.

Those questions matter because the military use of generative A.I. differs from office automation in a basic way: in war, speed is valuable, but error can be catastrophic. A chatbot that confidently delivers the wrong map reference, misstates a rule of engagement or mangles an intelligence summary is not merely inconvenient. It can shape life-and-death decisions.

Ethics Questions Around xAI Reach the Pentagon’s A.I. Leadership

The Pentagon’s embrace of A.I. has also raised questions about who is guiding that transformation and under what incentives.

New ethics disclosures have drawn attention to Emil Michael, the under secretary of defense for research and engineering, who oversees the department’s A.I. efforts and has been pushing for wider and faster adoption of the technology across the military. Records released this month showed that he sold a private investment in xAI earlier this year for a gain that could be as high as $24 million.

The timing has prompted scrutiny because the Defense Department had announced agreements with xAI in 2025, and Michael’s role includes negotiating with A.I. companies and shaping the Pentagon’s technology agenda. Ethics experts have pointed to federal conflict-of-interest rules that bar officials from participating in matters that could affect their own financial interests. The Pentagon has denied wrongdoing.

Even absent a legal violation, the disclosures underscore a central tension in the government’s race to adopt A.I.: the same small circle of investors, executives and policymakers often moves between Silicon Valley and Washington, blurring the line between public mission and private gain.

That overlap has become more significant as defense demand for advanced A.I. systems grows. Contracts, pilot programs and access agreements can rapidly alter the fortunes of companies competing to supply the government with large language models and related tools. In that environment, decisions about procurement, access and preferred vendors can carry enormous financial consequences.

Why This Moment Matters

For years, the Pentagon’s A.I. agenda was framed largely in terms of future competition with China and the need to modernize an aging bureaucracy. What is changing now is the immediacy. The department is trying to operationalize frontier A.I. today — not as a distant research priority, but as software for everyday use by analysts, planners and troops.

That urgency is producing three parallel pressures.

First, legal pressure: the Anthropic case could help determine whether frontier A.I. firms can maintain meaningful limits on military applications of their models.

Second, operational pressure: projects like Victor suggest the armed forces want systems built for military tasks, using military data, even if reliability and oversight standards are still emerging.

Third, governance pressure: the questions surrounding xAI and Pentagon leadership highlight how fragile public trust can be when oversight appears to lag behind investment and deployment.

The Defense Department has long argued that lawful military use, not corporate preference, should define the boundaries of national security technology. A.I. companies, meanwhile, are increasingly being forced to decide whether they are software vendors, ethical gatekeepers or both.

For now, the Pentagon is moving ahead on all fronts. But as the courts weigh contractual limits, the Army tests combat chatbots, and ethics questions swirl around procurement, the military’s A.I. future is being shaped not only by engineering advances but by unresolved arguments over control, accountability and risk.
