The use of AI in U.S. military and paramilitary operations is under increasing scrutiny as a “gung ho” approach to the technology embeds itself in these organizations. The growing militarization of AI, however, is creating pushback from within the AI industry as questions arise about its unfettered use by the Department of Defense (DoD) and Immigration and Customs Enforcement (ICE), which has become Homeland Security’s principal user of AI in its pursuit of deporting illegal aliens.

The most dramatic standoff between the DoD (aka the War Department) and AI developers is its increasingly public spat with Anthropic. As documented by Reuters, Anthropic (along with Google, OpenAI, and Elon Musk’s Grok AI) is a principal AI supplier to the DoD. Anthropic reportedly is worried about how its AI may be used for autonomous targeting and domestic surveillance, and is also concerned its AI may be used in ways contrary to international law.

Meanwhile, Defense Secretary Pete Hegseth has railed against AI models “that won’t allow you to fight wars,” comments aimed squarely at Anthropic in the eyes of most observers. Hegseth prefers AI models free of what he views as ideological constraints that limit lawful military applications.

Hegseth’s mantra is: “The future of American warfare is here and it’s spelled AI.” A clue to just how big a role AI will play in warfare comes from the just-announced Prometheon AI, which promises to go beyond situational awareness and give commanders the ability to anticipate battlefield developments before they occur. Aerodata’s Prometheon system is slated for the German military, reports NextGenDefense.

What constitutes lawful use of AI is largely up to the Pentagon to decide. Anthropic doesn’t like the idea that the Pentagon may ask it to tailor its AI to whatever definition of “lawful use” is currently in vogue, requests that may violate company guidelines. Critics also note that the Pentagon has dramatically reduced funding and staff for AI safety testing programs.

Elon Musk’s Grok AI is stepping up to fill that ethics-free role. Hegseth is plugging Grok into the Pentagon’s classified systems, a move that confers most-favored-AI status on Grok even as the model faces formal inquiries in Europe over the dissemination of child pornography and algorithmic manipulation.

Hegseth is fast-tracking Grok AI for DoD use, but DoD’s overall AI integration strategy comes with a caveat. While the federal government has historically taken the lead in tech innovation, Hegseth says DoD will essentially become a follower of AI innovation by private industry, with an eye toward “dual-use” developments that can be slated for swift integration into DoD applications. Helping to move things along is a newly constituted Science, Technology, and Innovation Board (STIB) that scraps the decades-old Defense Innovation Board and Defense Science Board. The move is seen as a way to consolidate the AI initiatives of the past few years under a single, faster-moving body. How transparent STIB’s decisions will be remains unknown.

Operational AI guidelines are also the core issue in the technology’s use by ICE. Homeland Security reports that ICE and Border Patrol agents are its prime users of AI. Among the AI tools used by ICE is a system devised by Denver-based Palantir that processes and grades tips received by the agency. Another Palantir tool, Enhanced Leads Identification & Targeting for Enforcement (ELITE), reportedly creates dossiers on potential deportation targets. The Palantir tools operate collectively under what the company calls Immigration OS, developed for the Trump administration under a $30 million contract.

Also in the AI tool belt is facial recognition software developed by Clearview AI, as well as a portable app version called MobileFortify developed by NEC. Homeland Security says it and affiliated agencies are deploying more than 100 AI systems in a variety of capacities.

Adding to the information mix is the expanded use of physical tools like license plate readers. ICE is also reportedly interested in data from online advertising brokers, including browser location data, browsing history, and purchasing information.

Taken together, it all amounts to a digital dragnet with AI as a central enabler.

The big concern is that AI-generated probability scores, of questionable reliability due to possible algorithmic bias, become the main driver of enforcement actions instead of evidence. Critics say these AI tools may be partly responsible for mistaken ICE and Border Patrol arrests by not-very-tech-savvy field agents more concerned with meeting quotas than confirming accuracy.

The American military is also focused on individual operators’ use of AI. A new Raft AI Mission System allows an operator to train and adapt AI models on the battlefield through natural language interaction.

ICE’s use of what some describe as military-grade AI, however, is prompting companies to distance themselves from the agency as public outrage against ICE grows. French tech giant Capgemini announced the selloff of its American subsidiary on February 1 in the face of public outrage in France over the subsidiary’s sourcing of software used to find individuals wanted by ICE. Capgemini implied that its American unit had essentially gone rogue in contracting with ICE, so it was dropped like a stale croissant.

On a grassroots level, tech activists are protesting ICE actions with a “Resist and Unsubscribe” boycott aimed at Amazon, Apple, Meta, Google, Microsoft, OpenAI, Netflix, Paramount+, Uber, and X. Both the boycott and the Capgemini selloff suggest that tech activism may be set for a comeback, fueled by the fear that authoritarianism grows on the back of AI, the technology powering what Harvard University researcher Anita Chan warns is “an immigrant and deportation machine” whose use will not stop with immigrants alone.