
Butlerian Jihad - Digital Threat Digest

PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.


The thing that interests me the most about the Dune universe is its depiction of a world which has banned, destroyed, and evolved beyond technology that can think for humans. The Butlerian Jihad is a ~100-year process in which computers and conscious robots are wiped out, in accordance with the commandment ‘thou shalt not make a machine in the likeness of a human mind’. The universe retains plenty of technical capability, but it is restricted to machine engineering in the absence of computing.

Removing technology in the form of conscious machines forces Dune to deal with human-centric social issues from an entirely human perspective. So why don’t we take a similar approach, as we begin to grapple with the human impact of the AI revolution? Back in our actual universe, Prime Mentat Rishi Sunak has sought to position the UK as the global leader in AI safety regulation, focused purely on the potential economic benefit. We seem to have moved from being unable to load Covid-19 passports on our phones when it rains too heavily near an NHS data centre to being able to debate heuristic patterns of ethical decision making with the Deliveroo chatbot in about seven weeks. At no point have we taken a break to pay anything more than lip service to the possible human-centric impact.

Most assessments of impact unfortunately fall into two buckets – the end-of-days fearmongers and the tech-panacea believers. It is lazy to devolve the risk-versus-reward debate of technological capability into the primordial dystopian/utopian dualism, as—despite Black Mirror’s best efforts—there are a finite number of doomsaying stories we can tell. History shows us that every element of tech, even that which has genuinely been developed with the most altruistic and human-centric of objectives, inevitably ends up being weaponised in some weird, sinister, dystopian fashion. Develop an app to turn your phone into a shortwave radio so that first responders can communicate during natural disasters? Sorry, off-grid militants in remote jungles are now using it to coordinate attacks on military outposts. Build a heavy load-bearing drone to speed the transport of donated organs? Sorry, now it’s a mortar-carrying octocopter. 3D printing will surely decentralise engineering capability and allow ordinary folk to easily build replacements for broken parts? Sorry, now extremists are printing untraceable firearms. But if we understand these potential outcomes from the beginning, we can proactively think about mitigation.

The intent doesn’t matter; the outcome matters. The original intent behind social media was – at face value – always positive. And while polarisation and disinformation have certainly been worsened by social media, their roots far precede the digital world.

So, in the case of AI, before having another moral panic about the introduction of a new technology, and before coming up with a three-word slogan that vaguely suggests how we’ll legislate it, can we please focus on the human-centric outcomes? Understanding how technological advances such as AI will impact poverty, unemployment, and discrimination should precede the race to legislate. Recognition algorithms are already demonstrably full of racial bias – maybe let’s learn how to avoid human-developed bias before we weaponise AI by training it on an internet full of human-developed, racially biased data.

To return to the Duniverse, we love art and literature that tries to answer the question of what will happen in the distant future, because we honestly have no idea. We don’t know if we’ll end up with a Butlerian Jihad or a Machine Revolt. Should we embrace our new AI overlords proactively, or should we begin taking baseball bats to the server stacks?

Whichever it ends up being, at the very least we shouldn’t continue to let economic and political motivations drive our social policies, or it won’t be a positive outcome.


More about Protection Group International's Digital Investigations

Our Digital Investigations Analysts combine modern exploitative technology with deep human analytical expertise covering the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team has a deep understanding of how various threat groups use social media, and follows a three-pronged approach focused on content, behaviour, and infrastructure to assess and substantiate threat landscapes.

Disclaimer: Protection Group International does not endorse any of the linked content.