The adoption of AI is driving organisations to reassess their operations and, in some cases, to ask whether they can replace staff headcount with technology. For PGI, however, AI adoption means identifying where the technology can improve our efficiency and ensure we remain a competitive supplier without compromising on the quality of our work.
AI tools reduce manual, repetitive tasks, enabling us to retrieve and correlate data at significantly increased speed and scale. This gives time back to our team to focus on high-value investigative work where human judgement and expertise are most important.
From both a business and delivery perspective, AI technology has clear advantages when used appropriately:
• Increased efficiency: Faster data retrieval and correlation.
• Enhanced productivity: Our team spends more time applying their specialist skills rather than on manual data gathering.
• Cost effectiveness: Improved efficiency can translate into reduced costs for clients.
However, it’s important that these benefits are balanced against inherent risks:
• Lack of transparency: There are risks in trusting AI outputs where the source is not entirely transparent.
• Over-reliance risk: The more accurate AI appears on the surface, the more likely it is to be trusted without quality reviews.
• Accountability challenges: Outputs must always be attributable and defensible.
At PGI, we address these risks by maintaining human validation over any AI-assisted work. AI is a tool to support the productivity of our teams, not to replace them, much like spell checking, grammar checking or, for the older readers, our friend ‘Clippy’.
Our team takes a critical view of new technology while recognising the opportunities it presents. Human oversight and validation of AI outputs are an essential step in our process before they form any part of delivery to our clients. Our experts apply professional judgement to review the quality of AI outputs and take full ownership of the final work they produce.
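The validation step described above can be pictured as a simple sign-off gate: AI-assisted output cannot be released to a client until a named human reviewer has approved it. The sketch below is purely illustrative, assuming hypothetical names (`Draft`, `approve`, `release`) rather than any actual PGI tooling.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative human-validation gate for AI-assisted work.
# All names here are hypothetical, not a real delivery system.

@dataclass
class Draft:
    content: str
    ai_assisted: bool
    reviewed_by: Optional[str] = None

    def approve(self, reviewer: str) -> None:
        """Record the human expert who validated this output."""
        self.reviewed_by = reviewer

def release(draft: Draft) -> str:
    """Only deliver work that a named human has signed off."""
    if draft.ai_assisted and draft.reviewed_by is None:
        raise ValueError("AI-assisted output requires human review before delivery")
    return draft.content

# Usage: an AI-assisted draft is blocked until an analyst approves it.
draft = Draft(content="Threat summary ...", ai_assisted=True)
draft.approve("analyst-name")
delivered = release(draft)
```

The design choice the sketch encodes is accountability: because `release` refuses unreviewed AI-assisted drafts, every delivered output is attributable to a named reviewer, mirroring the ownership principle described above.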
Some clients may explicitly request that AI tools are not used due to their own internal policies. We fully respect these requirements; however, we do highlight that restricting the use of such tools may impact efficiency and, in turn, increase delivery costs.
We believe that the responsible use of AI allows us to minimise risks while maximising value for our clients (as cliché as that sounds). Where appropriate, we use secure and private in-house AI solutions to ensure the highest levels of information security, while still benefiting from the efficiencies the technology provides. This includes the use of professional, enterprise-grade AI systems designed for secure environments, rather than consumer-grade free tools which carry a different risk profile.
Ultimately, our approach is to use AI in a controlled and transparent way to enhance our delivery capabilities while maintaining the high quality and security standards that our clients expect.
If you're using AI internally or as part of your external offering and want to make sure you're doing so safely and securely (and without sacrificing the quality of what you do), get in touch; we would be happy to support you.
