It has been over three years since OpenAI launched ChatGPT, the large language model (LLM) chatbot that at the time became the fastest consumer application to reach 100 million users and has become synonymous with the term “artificial intelligence” (AI). Since then, a vast array of generative AI tools has entered the market, including chatbots and image, video, and audio generation tools. Much of the research on AI and terrorism has focused on these generative applications, examining how extremist groups might use them to create propaganda, amplify narratives through bot networks, and experiment with personalized “radicalization bots”. While this body of work has illuminated AI’s impact on the digital realm, more attention needs to be paid to how AI tools, including those under the generative umbrella, are concretely used by terrorists and violent extremists for operational planning. This article addresses that gap by exploring how terrorists and violent extremists have leveraged AI in the operational planning of attacks and examines what this tells us about the incentives and benefits of AI use as perceived by perpetrators.
The year 2025 has witnessed a notable rise in incidents in which terrorists and violent extremists have leveraged AI tools to plan, research, and prepare attacks. Just as terrorists and violent extremists – especially lone wolves and inspired actors – have long relied heavily on internet forums, social media platforms, and messaging applications to acquire the operational knowledge and guidance needed to execute plots, we note an uptick in perpetrators and plotters turning to freely available AI products to optimize their operational toolkit. Specifically, according to our database of plots and attacks in 2025 in which AI was used for operational planning, it serves a role in learning (ranging from operational security to the details of constructing explosives), visualizing scenarios (e.g., creating images of the planned attack), and refining tactics through conversational, personalized guidance (e.g., step-by-step guidance on how to acquire the necessary chemical precursors for explosives). This trend underscores the urgent need for states, technology companies, and social media platforms to anticipate and adapt to the new realities of digitally enabled extremist activity and implement strategies to disrupt misuse.
Operational Use of AI in 2025 by Terrorists and Violent Extremists
Across 2025, this pattern has surfaced in a series of concrete plots and attacks that illustrate how AI tools now sit inside the operational workflow of violent actors. In Las Vegas, the suspect behind the New Year’s Day Cybertruck bombing consulted a chatbot to research explosives and detonation methods and to gather information on gun stores and private phone devices, treating a conversational AI system as an immediately available tutor for the design of his attack. Hours earlier in New Orleans, an Islamic State-inspired attacker drove a truck into pedestrians, killing 14 and injuring more than 50. The attacker had earlier used AI-enabled smart glasses equipped with a voice assistant and camera for reconnaissance in the historic French Quarter, underscoring how consumer wearables can double as reconnaissance and surveillance tools for target selection and operational planning. While it is unclear whether the New Orleans attacker used the AI functionalities of the glasses – which, for example, allow the user to ask questions about their environment via the small embedded camera – the possibilities of enhancing reconnaissance with wearable devices are significant. A would-be plotter could, for example, quickly gather information on targets while scouting an area, inquire about security protocols, and get a sense of crowd numbers and ambiance at a particular festivity.
In California, two cases from 2025 underscore that AI is used in operational planning across the ideological spectrum, including in cases in which investigators have identified no clear ideological motive. In the May 17 bombing of a Palm Springs fertility clinic, which killed the perpetrator and injured four individuals, a chatbot was used extensively in preparation for the plot. The suicidal, efilist perpetrator used it to guide him through the making of a powerful bomb using ammonium nitrate and fuel. In Los Angeles, the suspect in the Palisades Fire arson, which killed 12 people and burned down close to 7,000 structures in January, appears to have used generative AI to visualize his alleged plot. In one prompt, the suspect asked an image generator to create an image of a fire depicting impoverished people struggling against a giant gate emblazoned with a dollar sign while, on the other side, the wealthiest individuals were ‘laughing’ as the world ‘burned down.’ Aside from apparently helping him envision a scenario, the suspect also appears to have engineered his chat history on the day of the fire to look innocent, asking, “Are you at fault if a fire is lit because of your cigarettes?”
Use of AI for operational planning has also appeared in other regions and among younger offenders. In Tira, Israel, a 16-year-old charged with attempting a knife attack on Israeli police is reported to have consulted a chatbot for information on attack methods and tactics before approaching his target. In this case, the suspect, who acted “in protest of the Jewish occupation”, bypassed traditional barriers to operational guidance, which can be hard to find in his highly securitized environment and difficult to digest without prior experience. AI use for operational planning thus seems to have specific benefits for untrained lone actors with no outside assistance.
In Finland, the 16-year-old responsible for the Pirkkala school stabbing, in which three female students were injured, used AI prompts to help draft a manifesto and to structure planning for the assault, according to the manifesto that circulated online and that he sent to a Finnish newspaper. In this case, an LLM was used directly to steer the production of the manifesto outlining his justification for the attack and to prepare the logistics. The perpetrator’s explicit naming of ChatGPT as a guiding tool for the attack plausibly also serves to inspire others.
In Vienna, an 18-year-old Islamic State supporter, arrested in connection with alleged attack plans against the Israeli embassy in Vienna and the Imam Ali Islamic Center in Vienna-Floridsdorf, was convicted after a court found that he had used a chatbot for “attack fantasies” and bomb-making research. He had specifically discussed the production and storage of explosives.
Taken together, these incidents show AI operating at multiple points along the operational lifecycle, from ideation and fantasy, through research and planning, to reconnaissance in the physical world. In each case, AI accelerated learning and narrowed the gap between skillset and action. This emerging pattern suggests that counterterrorism, regulation, and platform governance must now treat the malicious use of AI as a core feature of contemporary extremist activity rather than a speculative concern.
Threat Assessment and Forecast
A recurring theme across these incidents is that the AI tools used in operational planning, such as smart glasses, chatbots, and image generators, function as shortcuts. They transform fragmented information – for example, the safe storage of chemicals, the technical steps of making a bomb, the places where one can acquire precursors – into clear, digestible steps that guide perpetrators through processes such as attack tactics, explosive composition, and logistical preparation. This capability significantly reduces the cognitive and technical barriers that previously constrained inexperienced or isolated actors. It is therefore important to keep tracking the profiles of users engaging with LLMs in attack plotting: are they indeed younger and more inexperienced than plotters who do not use these tools? Are they less likely to be in contact with Islamic State virtual ‘guides’ or enablers?
For would-be attackers, three factors explain why LLMs have become integral to operational planning. First, LLMs make accessible information that would otherwise be difficult to locate without prior knowledge of where to look online. While IS and IS-aligned communities online, for example, host various DIY guides on attack planning or can provide ‘guides’ for enabled plotters, they still require someone to know where to look. Second, conversations with chatbots break down lengthy, technical information guides into bite-sized, actionable instructions, effectively bringing documents or procedures that would otherwise fall far outside the skillset of an inexperienced plotter within his or her capabilities. While a foreign fighter with combat experience or a lone actor with a mechanical engineering background may easily understand the process of drone weaponization as explained in Islamic State guides, an inexperienced lone wolf may find the conversational, step-by-step nature of LLMs particularly useful. Third, chatbot answers are far more personalized than any search result from traditional search engines. These three factors closely parallel the benign use of generative AI for everyday problem-solving, where users seek personalized, accessible, and precise answers, including to questions that they might hesitate to ask a human.
The efficiency and immediacy of these tools appeal to a wide spectrum of actors, but they so far seem particularly transformative for lone wolves and inspired individuals who lack formal training or mentorship from experienced operatives. For these users, LLMs serve as virtual tutors, bridging gaps in expertise and accelerating the transition from intent to action.