AI threats follow the rules, too - Digital Threat Digest
PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.
The hyperfocus on AI continues to generate an incredible number of predictions around its future uses, for good and for ill. Even within digital risks alone, the predictions range from AI drowning the internet in fake posts to AI becoming sentient and destroying humanity. With such a wide range of potential futures, it’s understandable that people are worried about how this new technology may impact them.
The good news is that AI digital threats are still beholden to the same rules and constraints as any other digital threat. When a ‘bad actor’ is choosing whether to engage maliciously on the internet, they must weigh the costs against the potential benefits. For an individual, creating a long text post pushing a new conspiracy might not be worth the time if they know it won’t be shared widely. For a larger actor, spending money on equipment and manpower to make spam videos for ad revenue might not be worth it if nobody watches them.
And that’s the key to understanding where AI digital threats might present themselves. AI can write a long post faster than a human, but it can’t guarantee that post will go viral. AI can save money on video production, but it can’t guarantee anyone will see it. And while AI will certainly increase the rate at which this material is posted, that doesn’t necessarily help. Over time, platforms and regulators will have increasingly powerful defensive tools to detect and shut down AI-generated content. Additionally, the public will become increasingly aware of and resilient to this content, just as it did with bots in the 2010s.
Where AI is likely to make the most immediate difference, therefore, is where that cost-benefit calculation already favours benefit. So we’re more likely to see AI enhance pre-existing operations, or change how they’re presented, rather than develop completely new ones right away.
What does that mean for the average internet user? Barring a sentient AI uprising, in aggregate, you’re likely to experience more of the same. The phishing emails may be written slightly better, the spam videos may come at a faster clip, but the fundamental equations don’t change. Of course, this will not be a universal truth, and we will see new operations develop, especially on the fringes of the internet. But many of the lessons we have learned about generic digital threats also apply to AI-driven ones, and that is a good thing.
More about Protection Group International's Digital Investigations
Our Digital Investigations Analysts combine modern exploitative technology with deep human analytical expertise that covers the social media platforms themselves and the behaviours and the intents of those who use them. Our experienced analyst team have a deep understanding of how various threat groups use social media and follow a three-pronged approach focused on content, behaviour and infrastructure to assess and substantiate threat landscapes.
Disclaimer: Protection Group International does not endorse any of the linked content.