Defending society from influence operations
(An essay prepared originally for the Aspen Institute Congressional Program.)
“It’s central to the safety culture at BP – you hold the handrail when you walk down stairs. No question. I left the company ten years ago, but I cannot walk down a flight of stairs without holding that handrail, even now.” – Former executive, BP
Generative AI and the influence operations landscape
Influence operations are not, of course, new. They have been a mainstay of politics, intelligence, and public relations for as long as those disciplines have existed.
“As a KGB officer in India, I placed tens of thousands of news stories in the local and national press.” – Former KGB officer, speaking in London in 2023
“In 2019, I took a role with a US PR firm. My first job was to place news stories in the media suggesting that the link between tobacco and cancer was not as strong as you might think. Not saying there was no link, but seeding doubt.” – PR professional, speaking in 2025 in London
But Generative AI and shifts in the geopolitical landscape have changed the nature of the threat. The tools required to create convincing media, tailored narratives and persuasive conversations at scale were once the preserve of governments or ad agencies. Today, they are widely accessible at little cost. Messaging can be produced and adapted in real time. Instead of static propaganda, campaigns can now involve dynamic, personalised persuasion delivered through multiple channels.
The threat is to business as well as to society and democracy
The challenges this new wave of influence operations poses to society are well documented, even if not well understood or effectively addressed. Targeted and untargeted attacks from state actors seek to influence populations, undermine democracy, and affect or direct the actions of our leaders. These operations can also harm individuals, whose voice, image, or identity is expropriated and misused, or who are exposed to commercial or other influences through synthetic engagement.
The threat is also being felt by businesses. Influence operations make it harder to understand the world in which companies operate. They make investment decisions harder to judge, geopolitical threats with commercial impacts harder to evaluate, and consumer opinion harder to predict and understand. How reliable is a due diligence check on a prominent business or individual when influence operations are at play? What is a customer review worth if it was written at scale by AI?
“My company works for sovereign wealth funds. We take actions that improve the perception of our clients and of their businesses, online and offline. When we succeed, you won’t know we created your opinion.” – Former British Army influence operations officer
Businesses are also targets of influence operations, whether for political reasons, for profit, or out of activism or malice.
“After October 7, our company was suddenly the subject of loads of online stories saying we had supported a hard-line Israeli settler movement. We have almost no ties to Israel, so it was very strange, but our brand got attacked hugely online.” – Senior executive of an international food and drink brand
Detection is hard, and the threat to business is under-reported
It can be difficult to identify an influence operation, especially if it is well constructed and effective. It can operate on open social media, in closed online communities, in the mainstream press, and through in-person conversations and events. An online operation might include personalised direct messages to individuals which never appear in public, conducted entirely by AI.
It is also under-reported. Consultants and crisis advisers report (in private) numerous instances of influence operations targeting businesses and brands, but there is little shared in public. There are very few reliable case studies, and therefore almost no community response in a commercial context. The World Economic Forum notes that ‘the scale and speed of disinformation have become a significant economic threat’ but can point to only a few public commercial case studies.
Technical detection is likely to be part of the solution, and tools which seek to identify and address online influence operations (including deep fakes and coordinated inauthentic content production and sharing) are in an arms race with the technology which produces the synthetic content. While necessary and often effective, these tools are, as yet, imperfect and not a complete solution.
“We’ve got various detection systems in place, but about 50% of the [influence operations] we act on come from people spotting something weird.” – Comms specialist at a luxury consumer brand
The growth in the market for technical detection of influence operations targeted at the private sector (and the heightened interest in the issue from venture capital) is a good proxy measure for the increasing threat. While companies may not wish to talk openly about an influence operation which has exploited a sensitive grain of truth or caused reputational damage, they will invest to prevent it happening again.
A regulatory approach is proving challenging
Regulation could be part of the solution, but it is complex. It cannot effectively be directed at those carrying out influence operations, who are necessarily covert and already understand their behaviour is transgressive, nor, for obvious reasons, at the people whose opinions the operations affect.
Directing regulation at the platforms used to spread online influence operations is also problematic, both for reasons of free speech and because of the familiar debate over whether platforms are publishers or distributors, which we need not rehearse here.
Influence operations are a business threat with cyber security parallels
If regulation is an ineffective tool to address the threat to business from influence operations, we must consider whether companies will act to defend themselves out of self-interest rather than compulsion.
There are parallels here with cyber security. Twenty years ago, cyber was poorly understood, was thought to have limited ability to affect the operations of a business, and had little place at the executive table. Since then, it has become a subject of regular discussion by senior teams, who review and accept or reject cyber risks based on business context. Requirements around cyber security are now laid on business through codes of practice, the SEC, standards bodies, insurers, and others.
For many, though, it is only when they experience a cyber attack or they hear about one from a peer that it becomes real, and the focus shifts from minimal efforts at compliance to genuine and committed preparation. As more businesses are affected, more people have their awareness raised in this way, and the community standards of behaviour rise.
This includes awareness training and phishing testing
Awareness of the severity of the issue is raised throughout a workforce through corporate awareness-raising initiatives. For cyber, these include the full range of internal comms tools, and activities like phishing training, in which employees must attempt to identify a cyber attack hidden in an apparently innocuous email.
Phishing training can be of mixed quality and effectiveness. But when it is done well, it helps people understand what hackers might seek to do, why they might do it, and what it looks like when they do. Critically, it encourages the recipients of an email to stop and think whether the communication in front of them is asking them to set aside their previous assumptions. As AI makes phishing attacks harder to spot, this context-based caution is an increasingly important approach.
The rules say I need to follow a strict procedure before paying any invoice, but this email from the CEO is telling me to ignore the rules and make a hurried payment, using a new system. Why would they do that?
Over time, this raises the cognitive defences of the organisation, making caution and scepticism about the provenance, motive, and impact of messages second nature.
Influence operations training – motive, means, and message
There are clear parallels here to influence operations, and training staff to recognise the threat they pose to businesses raises cognitive defences. [Declaration of interest: Vantix Partners provides training like this. I believe we sell it because it’s important. Readers should consider that I might believe it’s important because we sell it.]
To be successful, this training must separate the message of an influence operation from the means and the motive. The message is hard to tackle, not only because of concerns around maintaining freedom of speech, but also because people rarely change their minds when told that an opinion they hold, or information they like, is false, even if it has been deliberately shaped by a hostile actor.
Like cyber attacks, influence operations have a definable set of means through which they seek to deliver their effect, and a range of general motives for which they might do so, from geopolitical effect to commercial advantage over a rival. Exposing those means and motives, and showing how they could be employed to do harm in a specific commercial context, can help people identify them in other contexts. While the approach is still debated, researchers have shown in controlled studies that this ‘inoculation’ helps people detect and resist influence operations.
The broad impact from cyber security and health and safety training…
The workplace is one of the few environments in which large numbers of adults can be exposed to training. Accurate figures are not readily available, but if we assume that 50% of the US workforce is required to complete cyber awareness training at least annually, that would be more than 80 million people being paid to improve their preparedness for a cyber attack every year in the US alone. That has a big impact on society. Those people take that awareness home to their personal lives, raising the level of their own cyber security and therefore that of their community. Phishing emails are now so widely understood that they are discussed across generations, and even joked about.
A similar ‘spillover effect’ has been observed in companies which have a significant focus on health and safety at work and track injuries sustained by their staff at home. The better the training at work, the fewer injuries at home.
…shows how influence operations training can raise societal defences
Most people know that images, audio and video can be manipulated, and that AI is accelerating and changing this in ways which are exciting and, at times, concerning. But that does not mean that we understand how that can affect us and the information we receive. It does not mean we are aware of the motives of those behind influence operations, how they work, or what effect they can have.
When people are shown how manipulated content, synthetic engagement, false consensus, or covert narrative shaping can affect their working life, they are likely to carry that awareness into their personal lives. They discuss it with colleagues, friends and family. They might ask different questions of the information in front of them: who produced this, why now, through what means, and to what end?
Over time, that can help raise cognitive resilience across society. Not by making every citizen an expert, but by making more people more alert to the motives, methods and effects of deliberate manipulation.
What, then, should we do?
We must bring business more squarely into the conversation about the risks of influence operations. The subject is most often discussed among academics, civil society, government officials, and politicians. That is important, but it is not sufficient.
The private sector reaches hundreds of millions of people every year through mandatory workplace training. It provides the revenue that sustains the cognitive defence sector we will need. And it often has more direct influence on people’s lives than academia or government.
There is no silver bullet, but there are some interventions which could help:
Raise awareness – Training and information campaigns are an important early step. A business which understands how influence operations can affect its staff, partners, and profits is more likely to invest in addressing the issue.
Facilitate sharing experiences – Forums in which businesses can safely share their experiences of influence operations will be important. The parallel experience of being hacked has moved from a source of shame to be kept secret to a community issue which is openly shared and discussed.
Address it through corporate governance – This is an issue which cuts across corporate silos, touching security, communications, geopolitical threat, HR, and legal functions. Responsibility and accountability for it are too often unassigned or confused. Where it appears on risk registers, it is frequently left unaddressed.
Encourage the further development of the sector – We need a thriving counter-influence operations sector. That will require cooperation between government, business, academia, and civil society, as is the case for cyber.
The private sector can do more than protect itself. It can become the most effective vector through which society builds a resistance to AI-enabled influence operations.