The threat of obvious fakes
The disinformation space surrounding conflicts—like Russia’s invasion of Ukraine—is often highly complex, with a variety of actors attempting to introduce disinformation that furthers their narratives. So far, much effort has gone into understanding and combating information operations that introduce convincing disinformation through complex webs of state-funded organisations, ranging from government-backed think-tanks to networks of websites covertly owned by the state. While it is certainly important to uncover these operations, something far less understood is the near-universal presence of low-quality, clearly fake ‘news organisations’ rampant across every platform.
These organisations are often deceptively simple, with iterative account names and post/video titles designed to manipulate algorithms and drive as much traffic as possible. They typically go to great lengths to hide any identifying information, and they share one another’s content incessantly. Most importantly, they exist to churn out as much disinformation as possible, as quickly as possible. Because of their sheer ubiquity, inputting common questions about a conflict into a search engine will often return results from these organisations, even when the algorithm favours traditional news outlets. For people who are sceptical of those traditional outlets, these organisations are often the first results they see. And since many people read only headlines or short descriptions, the content itself often doesn’t end up mattering anyway.
Ultimately, these low-quality organisations exist to confuse, not to convince. The distinction is subtle but important, because confusing an audience is much easier than convincing it. Creating confusion in an information space can turn an inciting incident into a routine one. It can neuter the passion of adversaries, and it can even turn passive viewers or readers into tacit supporters if they consume enough of this material to legitimise it internally. These organisations exploit preconceptions and heuristics to manipulate thinking over time, and the simplicity of the task allows them to do it at scale.
In this way, these organisations are often more dangerous than the flashy state-run efforts to generate believable disinformation, yet they are frequently overlooked precisely because of their simplicity and their content, which is clearly false from a moderator’s perspective. More research is desperately needed to understand the full impact and scale of these organisations, and platforms must begin to treat this type of disinformation as seriously as they do other forms.
PGI’s Social Media Intelligence Analysts combine modern exploitation technology with deep human analytical expertise covering both the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team has a deep understanding of how various threat groups use social media, and follows a three-pronged approach—focused on content, behaviour and infrastructure—to assess and substantiate threat landscapes.