
AI Regulation: The dangers of the fine print - Digital Threat Digest

PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.


The tech world is currently dominated by headlines about the dramatic leadership crisis at OpenAI over the last week. From a business perspective, the drama makes sense: however unorthodox the corporate structure, people will certainly fight for a company worth a reported $80 billion. But there was a secondary layer to the leadership fight: some on the board were worried that the company’s recent advances could ultimately lead to the destruction of humanity… again.

It's not the first time OpenAI has warned of the imminent destruction of our species at the hands of their own product. Indeed, this is the kind of self-indulgent language we have now come to expect from the AI development industry. It’s a brilliant piece of marketing. Dozens of follow-up articles appear after each portent of doom, followed by AI developers tragically pleading with regulators to hurry before it’s too late. They get to have the intriguing doomsday product that everyone wants to see, while also being the saviours of humanity; it’s a win-win.

Of course, while all this is playing out with breathless coverage, the decisions that will actually impact the future of humanity are being made elsewhere. World leaders from all major countries gathered at the UN earlier this week to decide how to regulate (or not regulate) autonomous weapons platforms. These include AI drones with the capacity to identify and kill a target without human intervention, as well as loitering munitions, drone swarms, and other cutting-edge or near-future technologies.

The results of this discussion broke down along expected lines: large, wealthy nations wanted to keep language as vague as possible, while smaller, less wealthy nations (where these weapons would most likely be used) advocated for restraint and regulation. In a moment of striking diplomatic unity, the normally conflicting great powers of the world found common ground in their pursuit of an effectively unregulated autonomous weapons industry.

That’s why I’m so cynical about the drama surrounding AI startups and media personalities. I just can’t get worked up about OpenAI’s “Project Q*”, which can now solve primary school maths problems, when far more serious and consequential decisions are being made about technology that exists right now.

Ultimately, as is almost always the case, the real threat comes not from the dramatic moonshot, but from the slow, relatively boring process of defining legal frameworks and the tortured wording of fine print.


More about Protection Group International's Digital Investigations

Our Digital Investigations Analysts combine modern exploitative technology with deep human analytical expertise covering both the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team has a deep understanding of how various threat groups use social media, and follows a three-pronged approach focused on content, behaviour, and infrastructure to assess and substantiate threat landscapes.

Disclaimer: Protection Group International does not endorse any of the linked content.