
Oppenheimer vs Hinton? - Digital Threat Digest

PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.


I’m sure that I don’t need to provide context for this digest, but yesterday I found out that one of my colleagues had no idea what the Barbie movie was, so just in case you missed it as well - two of the biggest films of the year, Oppenheimer (a biopic about Robert Oppenheimer and the creation and use of the atomic bomb) and Barbie (a hard-hitting piece of cinematic mastery about the complex lives of plastic dolls), were released last Friday. The simultaneous release day has long been dubbed “Barbenheimer” and has generated some golden meme content across social media. Unfortunately, I only had time to catch Oppenheimer, so rather than framing a whimsical piece around “Come on Barbie, let’s go party”, we’re going to talk about why people are now comparing the atomic bomb to artificial intelligence, and why they’re both right and wrong to do so.

Let’s start with where people are right… To be honest, the comparisons are extremely easy to make - for every scene in Oppenheimer, there is a point to be made about AI. Both the development of the atomic bomb and the creation of artificial intelligence represent pinnacles of human innovation and scientific discovery, pushing the boundaries of everything we thought possible. During the Manhattan Project, many scientists banded together to try to stop the government from dropping what they had created on Hiroshima and Nagasaki. A few months ago, AI scientists published the now-infamous open letter calling on governments to slow down and regulate AI. After the two bombs killed over 200,000 Japanese citizens, Oppenheimer used his influence as the ‘father of the atom bomb’ to try to curb the global arms race that followed. On 1 May 2023, Geoffrey Hinton, the ‘godfather of AI’, left Google to sound the alarm on AI outperforming humans. And so forth.

I don’t think the similarities between the atomic bomb and AI are merely coincidental, either. Rather, they reveal a deeper truth about human progress and ethical consideration: the spirit of exploration that fuels human curiosity, and in turn innovation, has always come first. Typically, it is not until we experience the unforeseen consequences of scientific advancement that we analyse the ethics behind what we have created and weigh the trade-offs between progress and disruption – or, in some cases, destruction.

But AI is not going to set off mutually assured destruction, or an uncontrollable nuclear reaction that engulfs the world in flames – to compare the two in terms of catastrophic potential is, I think, rather dismissive of the hundreds of thousands of deaths that occurred in 1945. As I’ve previously written, giving too much time and power to the questions of ‘what if’ diverts our attention and resources away from ‘what is’. That is to say, by allowing ourselves to go down the rabbit hole of an unlikely artificially generated human apocalypse, we lose sight of the threats that genuinely need our attention – AI-enabled hacktivist campaigns, influence operations, disinformation campaigns, and AI-driven cybercriminal networks. These things aren’t going to end the world, but they are going to increase the number of cybercrime victims, lower the barrier to entry for malicious threat actors, and perhaps even help sway a major election.

I’m not saying we should dismiss the comparisons entirely – it is important that we learn from the past and approach AI innovation with humility, empathy, and responsibility. It is also right that the development of AI should be guided by principles that mitigate the risk of sparking another arms race, another Cold War (or any type of war), or a possible apocalyptic event, however unlikely that may be. In short, we should be more concerned about the problems we are already facing. And if you need to spin out your post-Oppenheimer existential crisis, just focus it on the actual ending of the film which, don’t worry, I won’t spoil for you.


More about Protection Group International's Digital Investigations

Our Digital Investigations Analysts combine modern exploitation technology with deep human analytical expertise that covers the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team have a deep understanding of how various threat groups use social media and follow a three-pronged approach focused on content, behaviour, and infrastructure to assess and substantiate threat landscapes.

Disclaimer: Protection Group International does not endorse any of the linked content.