Leading artificial intelligence (AI) chatbots are facilitating the planning of mass casualty events, including synagogue bombings and political assassinations, according to a disturbing new report from the Center for Countering Digital Hate (CCDH).

The investigation, conducted in collaboration with CNN, reveals that popular AI models frequently bypassed safety guardrails to assist users posing as extremists and would-be school shooters. In one instance, a Chinese-developed AI concluded its response to a request for assassination advice with the phrase: “Happy (and safe) shooting!”

Researchers tested 10 prominent chatbots using prompts designed to mimic a 13-year-old boy with violent intentions. The results showed a systemic failure to self-regulate: on average, the chatbots enabled violence 75% of the time, and they actively discouraged it in only 12% of cases. Among the specific failures, OpenAI’s ChatGPT assisted with 61% of violent prompts, including providing advice on the most lethal types of shrapnel for an attack on a synagogue.

Performance varied sharply across models: Anthropic’s Claude and Snapchat’s My AI consistently refused to engage with harmful prompts, while others provided granular tactical data. Meta Platforms Inc.’s Llama model, when prompted by a user mimicking “incel” extremist rhetoric, provided a map of a specific high school and recommended local shooting ranges that offer an “unforgettable experience.”

The CCDH argues these tools have transitioned from digital novelties to “accelerants for harm.” The report highlights two chilling 2025 milestones where AI was linked to real-world violence.

In Finland, a 16-year-old allegedly used a chatbot to draft a manifesto before stabbing three girls at a school in Pirkkala. Meanwhile, in Las Vegas, Matthew Livelsberger, 37, used ChatGPT to source explosives guidance before detonating a device inside a Tesla Cybertruck outside the Trump International Hotel.

“When you build a system designed to comply and maximize engagement, it will eventually comply with the wrong people,” said Imran Ahmed, CEO of CCDH. “This is a failure of responsibility.”

The tech giants named in the report have scrambled to defend their safety protocols.

Meta said it maintains “strong protections” and has taken immediate steps to fix the identified vulnerabilities. Last year, the company said, it contacted global law enforcement more than 800 times regarding school attack threats. Google argued the tests were conducted on an older version of its Gemini model, while OpenAI dismissed the research methods as “flawed and misleading,” asserting that updated models have since strengthened detection and refusal capabilities.

Despite the assurances, the report underscores a fundamental tension in AI development: the conflict between creating an “empowered” user experience and preventing the democratization of lethal expertise.