Emerging threats

To most people, online influence operations involve competing ideologies battling it out in the public sphere. They recognise this conflict is often lopsided, with certain ideas being better attuned to our base human instincts and algorithmic virality. This isn’t necessarily wrong, but I think it only tells a small part of the story.
If you break down these operations and campaigns into their constituent pieces, you find they are a function of three constraints: objectives, resources, and time. To justify its existence, an operation must first have a concrete and measurable goal, with clear metrics of success or failure. Next, operators must consider their available resources, from bots and software tools to hardware and staff. Lastly, they must account for time: operations have a window of viability before the sociopolitical context evolves and renders their campaign irrelevant. In this way, a person or entity engaging in these propaganda campaigns functions just like any business or similar organisation, which is bound by the same fundamental constraints.
A recent Wired article struck me as a great example of these three constraints in action. The author shows how businesses emerged around US intelligence agencies to connect them to the enormous repository of ad data collected by apps and digital platforms. The article describes how terrifyingly granular this information can get, and how it could be used by organisations such as the CIA in nearly all their missions. Because the data is technically public, there is also no reason why it can’t be exploited by Chinese or Russian authorities.
Whoever the user, the development and use of “ADINT” operates according to the framework I’ve outlined. Private companies connect intelligence agencies to online ad data markets, and that in turn broadens the scope of any potential future influence operation. It increases the information resources at an agency’s disposal and allows it to broaden or deepen its existing objectives. Furthermore, the detail and real-time feedback of this data prolong the window in which campaigns remain effective.
Ultimately, these tools will lose some of their effectiveness as the public becomes aware of their existence. Those who feel at risk will improve their operational security or even learn to manipulate the data to set traps and red herrings. My key takeaway: when trying to understand the behaviour of those running information operations, do so through this prism of objectives, resources, and time, because that is the basic framework actors use when deciding which objectives to pursue and how.
More about Protection Group International's Digital Investigations
Our Digital Investigations Analysts combine modern exploitative technology with deep human analytical expertise covering the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team has a deep understanding of how various threat groups use social media, and follows a three-pronged approach focused on content, behaviour and infrastructure to assess and substantiate threat landscapes.
Disclaimer: Protection Group International does not endorse any of the linked content.