The revolution might be livestreamed
From any single perspective it can be difficult to appreciate the sheer scale of the internet. Most of us log on for a few hours each day, browse the same 10-15 websites and the same couple of social media sites, and repeat the process the next day. This is partly why online radicalisation is so difficult to understand when attacks happen: media coverage struggles to explain to an audience who spend 99.9% of their time bouncing between The Guardian and Amazon how it could have happened online. The result is often reductionist explanations, and a form of digital mysticism attached to the more niche spaces online, like Discord. Except Discord isn’t niche at this point – it has 150 million monthly active users – because the internet is absolutely massive.
There were a lot of hot takes about the Buffalo shooting the day after it happened, and a few too many lukewarm ones. Many focused on the incident itself, but from an emerging digital threat perspective, the most significant angles were coordination and livestreaming.
Every day there are 500 million tweets. A tweet is a known quantity: the majority are text only, and the length is limited. Automated detection and moderation systems can (theoretically) do a lot with text. Yet a significant number of inauthentic Twitter profiles still wreaked havoc around the 2016 US election cycle. The shift from text to media raises further problems, although fairly robust image and video moderation capabilities exist – and yet doctored and partisan video content promoting anti-vaxx ideologies spread widely during the Covid-19 pandemic. The next evolution is live content. Baked Alaska, Alex Jones, and various other right-wing figures livestreamed their efforts to breach the US Capitol on 6 January 2021. Real-time detection of potentially violative content being streamed live is the next level of sophistication, and the direction in which most platforms and services are headed. You can go live on TikTok, Instagram, YouTube, and Twitch, and there is even still a buried option to do so on Twitter.
Accompanying the threats and challenges of livestreaming comes coordination. The storming of the Capitol was coordinated openly online, across a variety of communications services, and was clearly shaping up to be a significant threat from as early as November 2020. Gone are the days of coded knocks at doors and chalk marks on park benches: someone on one side of the world can coordinate malicious real-world activity on the other entirely online. And if they’re discovered, they can burn the evidence instantly and rebuild the whole system in as long as it takes to register a new server. For every serious threat actor who follows through, there are 5,000+ fanboys shitposting white supremacist threats into the ether.
The scale of the problem has never been greater. Livestreaming is clearly the commercial future of online content, yet it poses the greatest difficulties in moderation and investigation. Coordination no longer leaves a real-world paper trail. This combination makes proactive threat detection far more difficult and resource-intensive – and also frustrating, because reactive investigation of incidents like the Buffalo attack shows that the warning signs were there. It’s just a question of scale.
PGI’s Social Media Intelligence Analysts combine modern exploitation technology with deep human analytical expertise covering both the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team has a deep understanding of how various threat groups use social media, and follows a three-pronged approach – focused on content, behaviour, and infrastructure – to assess and substantiate threat landscapes.