Artificial Intellisense

U.S. military admits AI presence in Iran war room, Congress seeks scrutiny

Posted on March 12, 2026

Artificial intelligence has crossed a significant threshold in modern military operations. The U.S. military has confirmed that advanced AI systems now play a direct role in the Iran war, helping plan airstrikes against Iranian targets. The admission has set off an urgent push from lawmakers demanding stronger guardrails and transparent oversight of how machine-driven analysis is shaping life-or-death combat decisions.

Defense officials have acknowledged using AI-powered software developed by data analytics firm Palantir Technologies to support targeting operations. The platform incorporates technology from AI safety company Anthropic, including its large language model Claude, to rapidly process high volumes of military intelligence data.

While senior commanders insist that human decision-makers retain final authority over all strike orders, critics on Capitol Hill warn that the breakneck pace of AI adoption inside the armed forces could outrun the oversight mechanisms needed to keep it accountable.

AI tools embedded in military intelligence workflows


The AI-assisted targeting tools operate within an intelligence analysis platform connected to the Pentagon’s Project Maven — a long-running program that has processed surveillance footage and battlefield data for several years.

Sources familiar with the program say the technology is now being applied in the Iran war, gathering intelligence on Iranian military assets and positions. By rapidly correlating satellite imagery, signals intelligence, and other data streams, the AI system surfaces patterns and potential targets for human analysts to evaluate.

Military leadership argues that the technology helps compress analysis timelines that once stretched across hours or even days into a matter of seconds.

Adm. Brad Cooper outlined the shift in a video statement released online.

“Our warfighters are leveraging a variety of advanced AI tools. These systems help us sift through vast amounts of data in seconds so our leaders can cut through the noise and make smarter decisions faster than the enemy can react.”

Cooper was direct in clarifying where the technology’s authority ends.

“Humans will always make final decisions on what to shoot and what not to shoot and when to shoot, but advanced AI tools can turn processes used to take hours and sometimes even days into seconds.”

Congressional concern grows over battlefield AI


As the Iran war continues, those assurances have not silenced lawmakers. Members of both parties are demanding greater transparency from the Defense Department about how AI-generated recommendations are validated before military force is authorized.

Rep. Jill Tokuda, who sits on the House Armed Services Committee, said the stakes demand a full accounting.

“We need a full, impartial review to determine if AI has already harmed or jeopardized lives in the war with Iran. Human judgment must remain at the center of life-or-death decisions.”

Rep. Sara Jacobs raised a different but related concern — the risk of over-reliance on automated systems that can fail in ways operators may not detect.

“AI tools aren’t 100% reliable — they can fail in subtle ways and yet operators continue to over-trust them.”

Jacobs made clear that the goal is not to remove AI from military operations entirely but to ensure robust human control at every critical juncture, including in the Iran war.

“We have a responsibility to enforce strict guardrails on the military’s use of AI and guarantee a human is in the loop in every decision to use lethal force.”

Pentagon says humans still control lethal decisions

Defense Department leaders have consistently stated that fully autonomous weapons are not part of the military’s doctrine. Pentagon chief spokesperson Sean Parnell reiterated that the military does not intend to use AI to develop autonomous weapons systems that operate without human involvement.

Supporters of AI-assisted warfare argue that the technology is now essential, given the speed and volume of intelligence that modern conflict generates.

Rep. Pat Harrigan framed it as a force multiplier, not a replacement for human judgment.

“AI is a tool that helps our warfighters process enormous amounts of data faster than any human could alone. But no AI system replaces the judgment, the training, and the experience of the American warfighter.”

Harrigan also cited concrete operational results, stating that AI-assisted analysis supported more than 2,000 strikes during a recent military operation known as Epic Fury during the Iran war.

Tech companies and Pentagon clash over AI limits

The growing integration of commercial AI into defense operations has created friction between technology companies and the federal government. Anthropic has sought to restrict certain applications of its AI — particularly in areas involving domestic surveillance or autonomous weapons systems.

That tension escalated sharply after the Defense Department moved to designate Anthropic as a national security threat — a classification that could effectively end its participation in military programs. Anthropic has responded by filing legal action to challenge the designation.

The dispute underscores a broader and unresolved debate within the technology sector about where ethical boundaries should sit when AI is deployed in combat environments.

Experts warn about reliability risks


AI researchers and national security analysts say the fundamental issue is one of reliability. Even the most sophisticated AI models are not infallible, and errors in a targeting context carry catastrophic consequences.

Anthropic CEO Dario Amodei acknowledged that limitation directly.

“I can’t tell you there’s a 100% chance that even the systems we build are perfectly reliable.”

A separate study conducted by OpenAI has documented instances where large language models generate plausible but factually incorrect information — a behavior commonly referred to as hallucination. That vulnerability raises serious questions about how much weight targeting analysts should assign to AI-generated outputs when lives are at stake.

Analysts say AI can accelerate — but not replace — human judgment


Defense analysts who study autonomous systems and military AI say the technology excels at compressing information workflows but falls short of being able to make complex combat decisions independently.

Mark Beall, a former Pentagon AI strategy official now affiliated with the AI Policy Network, described the current role of AI in military operations in measured terms.

“There’s a lot of steps before the trigger gets pulled. AI systems are being deployed very effectively to accelerate existing workflows and allow commanders and analysts and planners to have better and faster decision-making capabilities.”

But Beall drew a firm line when it comes to deploying AI in weapon systems.

“When it comes to actually deploying weapon systems, this technology is not ready yet.”

Global race to militarize AI intensifies

The American debate over military AI is unfolding within a broader global arms race, one now playing out openly in the Iran war. Nations across the world are pouring resources into AI-powered surveillance, autonomous defense platforms, and machine-assisted targeting analysis. Geopolitical competition is intensifying pressure on governments to accelerate deployment timelines — even as the technology continues to mature.

Heidy Khlaaf, chief AI scientist at the AI Now Institute, warned that framing speed as a strategic asset obscures a more troubling reality.

“It’s very dangerous that ‘speed’ is somehow being sold to us as strategic here, when it’s really a cover for indiscriminate targeting when you consider how inaccurate these models are.”

As artificial intelligence becomes more deeply embedded across global militaries, the central question has shifted. Policymakers, researchers, and lawmakers are no longer debating whether AI will shape the future of warfare — they are focused on how tightly that future will be controlled, and by whom.

Should AI play a role in military decisions, like what we are seeing in the Iran war? What should be the limits? Please share your views in the comments below.



©2026 Artificial Intellisense