DeepFakes and the US 2020 Elections


11-02-2020


In brief

  • As ‘Fake News’ overshadowed the 2016 presidential election, so DeepFakes are likely to damage electoral integrity in 2020.
  • DeepFakes are a form of synthetic media wherein the creator manipulates existing footage to deceive the viewer.
  • DeepFakes will likely be deployed to exploit divisive issues and microtarget specific groups.
  • Attempts by social media firms to regulate the use of DeepFakes may come too late to prevent such media from disrupting and influencing the online information environment.

The purpose of DeepFakes

DeepFakes will be used to rally and divide polarised voters around key narratives including abuse of power, gun control, immigration, corruption and economic issues. DeepFakes will create division around these issues as well as erode trust between consumers and the media. Doctored videos could show candidates making contradictory off-record remarks about the Trump impeachment trial, or Joe Biden commenting on his son’s links to Ukraine. Fake images could show a candidate associating with gun control advocates or white nationalists. Manipulated audio could play a recording of a candidate’s private phone conversations in which they express opinions on abortion counter to their campaign claims.

Critical seats and moments, particularly debates, are at risk of being targeted too, as they have the potential to swing wider political opinion. Microtargeting, in both timing and audience, will damage electoral integrity more than broad attacks on an entire party’s campaign.

Types of DeepFake content

DeepFake video technology is rapidly approaching the point where it can create convincing videos of a figure doing or saying something they never did. DeepFake audio is even more advanced, with fake soundbites extremely hard to distinguish from legitimate audio, especially when designed to sound like a phone call.

Lower resolution DeepFake videos – often referred to as ShallowFakes – can be equally impactful because audiences implicitly trust visual media. Though ShallowFakes are more easily disproven, viewers scrolling rapidly through social media timelines absorb such content unwittingly. Content overload will therefore be key to the impact of DeepFakes on the 2020 election.

How DeepFakes spread

The reach of social media networks ensures that DeepFakes will be prolific in the online information environment. Social media algorithms also accelerate the speed at which polarising content goes viral among in-groups of supporters. Once a DeepFake – or indeed any other form of disinformation – goes viral, any subsequent retraction is too little, too late.

With the 2020 US election approaching, social media companies have issued rules to prevent the spread of DeepFakes on their platforms. Twitter does not allow users to “deceptively share synthetic or manipulated media that are likely to cause harm.” Facebook has banned “edited or synthesized” content which is not parody or satire from its suite of platforms.

However, Terms of Service will not stop the spread of DeepFakes, with users able to hide behind claims of parody or a lack of real-world impact. The image-centric nature of Instagram and the difficulty in enforcing regulations mean it will be a key forum for dissemination of DeepFakes throughout the election cycle.

The effects

  • By discrediting political figures, DeepFakes erode trust in candidates.
  • By polarising and enraging audiences around divisive issues, DeepFakes can lead to voter manipulation.
  • By sparking mass conversation about the falsity of video and audio evidence, DeepFakes erode trust in legitimate media and undermine the idea of genuine evidence. Political figures could exploit this to discredit real evidence and avoid accountability or transparency.

DeepFakes are going to become more prominent in all forms of media. With DeepFake detection capabilities lagging behind creation tools, social media firms will inevitably be engaged in a cat-and-mouse game trying to stop the weaponisation of manipulated media. As malign entities continue to exploit DeepFakes, deployment of the technology will continue to polarise an increasingly cynical society and erode trust in the political system.

Our Intelligence specialists provide corporate intelligence and geopolitical risk analysis to multinational corporations and governments worldwide. Our aim is to help our clients navigate the complex and uncertain global, 24/7 digital world, enabling them to understand and manage their risk exposure and ensure operational resilience. Contact us to discuss how we can help: riskanalysis@pgitl.com
