
On India's first 'AI election' - Digital Threat Digest

PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.


India is currently in the middle of its 44-day general election, which is divided into seven phases and ends on 1 June. The largest election in the world is both a democratic linchpin and a minefield of digital and real-world threats. In 2019, for example, the election was marred by misinformation, hate speech, and political violence, with over 700 instances of violence and eleven deaths in West Bengal, Kashmir and Manipur alone. These issues persist today, amplified and exploited for political gain by a range of authentic and inauthentic threat actors – all of which operate within a tumultuous information environment, at a far larger scale than anywhere else in the world (keep in mind that the country has 900 million internet users). And that's exactly why India has been my primary focus for the better part of two years.

In January 2024, the World Economic Forum ranked India the most at-risk country for mis/disinformation. This warning has rung true in the first three phases of the election, with the information environment saturated with, for example: Islamophobic conspiracies such as 'Love Jihad' and 'election Jihad' in Muslim-dominated constituencies; misinformation surrounding electronic voting machines (EVMs); and, more generally, partisan manipulated images and videos aimed at deceiving the electorate.

These threats are exacerbated by the country's rapid technological advancement. Where in 2014 we saw 3D hologram technology used to broadcast political speeches, in 2024 we are seeing politicians' AI-generated avatars addressing voters by name. Not everyone sees this as a threat, and some even tout it as the bright future of political campaigning. But I'm more concerned with the ethics of manipulating reality, particularly for a voter base that is so easily misled: in what circumstances, if at all, does a manipulated image or video become 'okay' to use?

Take the personalised AI-generated avatars, for example. Or the resurrection of dead politicians endorsing new candidates – all promoted by leading political parties in the country (including the Bharatiya Janata Party, the Indian National Congress, and the Dravida Munnetra Kazhagam). There is no nefarious intent here – these parties are promoting themselves, not falsely discrediting a rival – but does that make it an acceptable campaigning tool? I would argue that these examples are just as harmful as more malicious uses of AI-generated campaigning material, such as the heavily criticised doctored videos of Bollywood celebrities denouncing Prime Minister Narendra Modi, and the videos making it appear that BJP leader Amit Shah said his party would abolish religious reservations in Telangana – because at the end of the day, all of these examples still manipulate reality, casting doubt on the knowledge of the electorate and the fairness of the vote.

Being at the forefront of these emerging threats and home to a chronically online populace, India acts as a microcosm for the rest of the world. There's so much we can learn from the country and its first 'AI election', both in terms of democratic values and threat behaviours/pressure points (particularly given the number of countries going to the polls this year). And that isn't something we should forget.

Subscribe to the Digital Threat Digest.


More about Protection Group International's Digital Investigations

Our Digital Investigations Analysts combine modern exploitative technology with deep human analytical expertise that covers both the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team has a deep understanding of how various threat groups use social media and follows a three-pronged approach focused on content, behaviour and infrastructure to assess and substantiate threat landscapes.

Disclaimer: Protection Group International does not endorse any of the linked content.