On 02 December, a 7.6 magnitude earthquake struck the Philippines, and almost immediately afterwards my X (formerly Twitter) feed was filled with posts about it. While most of these were genuine, it wasn't hard to find the occasional piece of mis/disinformation seeded among them. For example, multiple videos misattributed existing footage from Japan and Taiwan to this earthquake, misleading users about the severity of its impact and inducing panic. This isn't novel behaviour: similar posts circulated across the Filipino information environment in previous years, such as during the July 2022 Abra earthquake and the September 2022 Mindanao earthquakes, and in other countries as well.
I have previously written about crisis-focused mis/disinformation and the vulnerability it breeds online. At face value, this may appear a less egregious digital threat because it does not specifically target protected groups and communities. However, I'd argue that it is just as dangerous as threats that do, if not more so, primarily because of the unique panic, susceptibility and instantaneous damage created during crisis situations. Over the last five years, for example, there have been multiple documented instances of such disinformation exploiting victims' trauma and impeding relief operations, including in countries such as India, Mali and Turkey. The speed of its impact is notable here, lending urgency to the need to have counter-disinformation measures established in advance.
It is also important to understand just who is seeding this mis/disinformation, and why. Over time, research has narrowed it down to two key types of grassroots threat actor: those chasing virality and higher engagement on social media via clickbait, and those seeking financial gain (for example, through fake fundraising initiatives). A February 2023 article published by SOAS University of London also highlights the potential for foreign influence operations to exploit crisis situations – particularly actors with a vested interest in destabilising and maligning the affected country. The mis/disinformation is then amplified by ordinary internet users, who are often too caught up in the unfolding situation to carry out the due diligence it requires. This is more common than we think: on a personal level, I, too, have amplified unverified content relating to crises in the past, as have many people I know.
In short, crisis-driven mis/disinformation is too often brushed under the carpet as a 'small-scale' digital threat, to the point that it has now become cyclical. We've grown accustomed to it, and often almost expect it – an expectation that is met nine times out of ten, and which can have a severe detrimental impact. It is now time for governments to properly focus on this threat, enabling top-down digital resilience and media literacy through targeted legislation and coordination with independent open-source tools (take Ushahidi, for example), social media platforms and crisis-mapping initiatives.
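For readers who want to experiment with the kind of coordination described above, the sketch below shows one way a verification workflow might pull recent reports from an Ushahidi-style crisis-mapping deployment, so that viral social media claims can be checked against reports gathered on the ground. The base URL, endpoint path and response field names here are illustrative assumptions rather than a documented API – consult the specific deployment's API documentation before building on any of them.

```python
# A minimal sketch of pulling recent crisis reports from an Ushahidi-style
# deployment so they can be cross-referenced against claims circulating on
# social media. The base URL, endpoint path and response fields below are
# assumptions for illustration only; real deployments will differ.
from datetime import datetime, timedelta, timezone

import requests

# Hypothetical deployment URL and posts endpoint (assumed, not documented here).
BASE_URL = "https://example-crisis-map.ushahidi.io"
POSTS_ENDPOINT = f"{BASE_URL}/api/v5/posts"


def fetch_recent_reports(hours: int = 24) -> list[dict]:
    """Return reports published within the last `hours` hours."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=hours)
    response = requests.get(POSTS_ENDPOINT, params={"limit": 50}, timeout=30)
    response.raise_for_status()
    posts = response.json().get("results", [])  # field name is an assumption

    recent = []
    for post in posts:
        created = post.get("post_date") or post.get("created")
        if not created:
            continue
        # Timestamps are assumed to be ISO 8601; adjust parsing if they are not.
        when = datetime.fromisoformat(created.replace("Z", "+00:00"))
        if when >= cutoff:
            recent.append({"title": post.get("title"), "published": created})
    return recent


if __name__ == "__main__":
    # Print a simple list that an analyst could compare against viral claims.
    for report in fetch_recent_reports(hours=24):
        print(f"{report['published']}  {report['title']}")
```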
More about Protection Group International's Digital Investigations
Our Digital Investigations Analysts combine modern exploitation technology with deep human analytical expertise covering the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team has a deep understanding of how various threat groups use social media, and follows a three-pronged approach focused on content, behaviour and infrastructure to assess and substantiate threat landscapes.
Disclaimer: Protection Group International does not endorse any of the linked content.
