
Where PGI stands on AI adoption

Like many other organisations, PGI recognises that rather than resist AI, we need to embrace it responsibly and adapt our approach to a rapidly evolving market.

Keith Buzzard, Chief Technology Officer

The adoption of AI is driving organisations to reassess their operations and, in some cases, to consider whether they can replace staff headcount with technology. For PGI, however, AI adoption means identifying where the technology can improve our efficiency and ensure we remain a competitive supplier without compromising on the quality of our work.

AI tools reduce manual, repetitive tasks, enabling us to retrieve and correlate data at significantly increased speed and scale. This gives time back to our team to focus on high-value investigative work where human judgement and expertise are most important.

The benefits and the risks 

From both a business and delivery perspective, AI technology has clear advantages when used appropriately:

•   Increased efficiency: Faster data retrieval and correlation.
•   Enhanced productivity: Our team spends more time applying their specialist skills rather than on manual data gathering.
•   Cost effectiveness: Improved efficiency can translate into reduced costs for clients.

However, it’s important that these benefits are balanced against inherent risks:

•   Lack of transparency: There are risks in trusting AI outputs where the source is not entirely transparent.
•   Over-reliance risk: The more accurate AI appears on the surface, the more likely it is to be trusted without quality reviews.
•   Accountability challenges: Outputs must always be attributable and defensible. 

Enhancing our work without sacrificing quality

At PGI, we address these risks by maintaining human validation over any AI-assisted work. AI is a tool to support the productivity of our teams, not to replace them, much like spell checking, grammar validation or, for older readers, our old friend 'Clippy'.

Our team takes a critical view of new technology while recognising the opportunities it presents. Human oversight and validation of any AI output is an essential step in our process before that output forms any part of a client deliverable. Our experts apply professional judgement to review the quality of AI outputs and take full ownership of the final work they produce.

Some clients may explicitly request that AI tools are not used due to their own internal policies. We fully respect these requirements; however, we do highlight that restricting the use of such tools may impact efficiency and, in turn, increase delivery costs.

We believe that the responsible use of AI allows us to minimise risks while maximising value for our clients (as cliché as that sounds). Where appropriate, we use secure and private in-house AI solutions to ensure the highest levels of information security, while still benefiting from the efficiencies the technology provides. This includes the use of professional, enterprise-grade AI systems designed for secure environments, rather than consumer-grade free tools which carry a different risk profile.

Ultimately, our approach is to use AI in a controlled and transparent way to enhance our delivery capabilities while maintaining the high quality and security standards that our clients expect. 

It's your turn

If you're using AI internally or as part of your external offering and want to make sure you're doing so safely and securely (and without sacrificing the quality of what you do), get in touch, because we would be happy to support you.