Laura Jasper on the AI threat: It’s not just fake news, it’s personalized political warfare

 


Laura Jasper. Photo by Vančo Džambaski, CC BY-NC.

This interview by Elida Zylbeari was first published by Antidisinfo.net as part of the Western Balkans Anti-Disinformation Hub on November 12, 2025. An edited version is republished here under a content-sharing agreement between Global Voices and Metamorphosis Foundation. 

The nature of foreign interference is fundamentally changing. Laura Jasper, a leading expert on Foreign Information Manipulation and Interference (FIMI) at The Hague Centre for Strategic Studies (HCSS), says in an interview with Antidisinfo.net that the greatest strategic threat posed by generative AI is the unprecedented speed, scale, and personalization of disinformation campaigns. She highlights that attributing complex attacks is now a matter of probability rather than certainty, because adversaries use proxies and commercial tools. Across Europe and the Indo-Pacific, hostile actors exploit a single shared vulnerability: a high dependency on commercial platforms coupled with deep fractures in social trust. Jasper also argues that to measure success, analysts must shift from tracking opinions to verifying real-world behavioral outcomes — the ultimate goal of disinformation. 

Elida Zylbeari (EZ): How is Generative AI fundamentally changing the game for foreign actors? In simple terms, what is the single biggest new strategic threat that AI poses to our democracies right now? 

Laura Jasper (LJ): Put very simply, GenAI poses challenges in the following respects: 1) the speed at which disinformation is disseminated, 2) the scale at which it is spread, and 3) how it allows for the ‘personalization’ of messages — meaning it becomes easier to tailor messages at large scale to different target audiences.  

EZ: When you analyze a disinformation campaign, how hard is it to say definitively: “This country or group did it?” What unique information or data do analysts need to confidently attribute a complex attack? 

LJ: The question of attribution is more often one of probability than a binary, clear-cut decision. We therefore speak in terms of ‘it is likely that’ rather than stating with 100 percent certainty that a particular actor did it. This is because adversaries increasingly make use of proxies, false flags, and commercial tools (including GenAI). It is much more feasible and workable for an analyst to assign confidence levels (e.g., low/medium/high) than absolute certainty. No single tool or piece of information will magically make the question of attribution easier to solve. Assigning probabilities, communicating them, and publishing the evidence base that analysts gather is how we preserve credibility — and also how we build our collective knowledge base by sharing it with other parties.  

EZ: HCSS studies FIMI across the globe. What is the most dangerous shared vulnerability that you see in both Europe and the Indo-Pacific that hostile actors are currently exploiting in their information campaigns? 


Laura Jasper, a leading expert on Foreign Information Manipulation and Interference (FIMI). Photo by Antidisinfo.net, used with permission.

LJ: We have recently published these two studies that you can look at in regard to this question: Building Bridges: Euro-Indo-Pacific Cooperation for resilient FIMI Strategies and FIMI in Focus: Navigating Information Threats in the Indo-Pacific and Europe. 

In these studies we highlight that the main shared vulnerabilities are a high dependency on commercial platforms combined with fractures in social trust (polarization, low institutional trust). These are exploited in the same way across both regions. The most dangerous of these is the exploitation and amplification of existing social trust fractures by hostile actors.  

EZ: Disinformation aims to change behavior, not just opinions. How do you measure if a foreign campaign is succeeding in the real world? What data shows analysts that a society is truly resilient? 

LJ: Behavior is driven by opinions. Someone might have changed their opinion, but that change is not visible in the physical world until the person’s behavior changes as a result — for instance, they vote differently or express their opinion in a physical, material manner. Therefore, as analysts, we look at changes in behavior: unlike changes in opinion, we can see them, register them, and thus measure them.    

The question asks for two different sets of measurements: 1) the impact of FIMI campaigns, and 2) how well a society can withstand these campaigns.  

For both questions there are a few important factors to keep in mind. Disinformation’s real goal is to change behavior, so analysts must first define the specific behavioral end-state they want to measure — for example, reduced voter turnout or increased protest participation. Measuring success then requires clear baselines and counterfactuals to see whether behavior actually shifted after a campaign. Analysts combine quantitative data (polling, mobility, transaction or participation records) with qualitative insights (interviews, focus groups) to link observed actions to exposure. True resilience appears when societies quickly recover from attempted manipulation — when the intended behaviors do not materialize or rebound rapidly. In short, effective measurement starts with the end in mind: defining, tracking, and verifying observable behavioral outcomes rather than just opinions.

This answer is mostly derived from this study we did some time ago: Start with the End: Effect Measurement of Behavioural Influencing in Military Operations.


Journalist Elida Zylbeari and Laura Jasper during the interview. Photo by Antidisinfo.net, used with permission.

EZ: When foreign influence falls into a “grey zone” — meaning it’s harmful but not strictly illegal — what is the most effective, non-legal strategic tool governments should use to push back against it? 

LJ: I would strongly advise against using the term ‘non-legal,’ as it suggests operating outside of the law. I therefore cannot answer the question as phrased, since doing so would imply advising on how to operate outside of the law.  

In general, I believe these tools and this responsibility should not be left solely with the highest levels of government. The strength lies in engaging more local actors across borders to build trust within societies. By local actors I mean community builders, investigative journalists, and so on. So this should not come solely top-down from the government but should instead be handled at a more granular level, throughout the whole of society.  
