Tobias Redington, Intelligence Consultant
Unregulated social media – in brief
- Stricter regulations on large social media platforms are driving some extremists to smaller, unregulated networks.
- Many of these small platforms self-promote as unregulated havens of ‘free speech’.
- Their image as enclaves of online freedom is deceptive: the hosts, registrars and other service providers they depend on still influence what content is allowed on the platforms.
The purpose of unregulated social media
Social media was once an unregulated land, in which users faced few repercussions for posts, allowing its exploitation by extremists. In recent years, however, the tolerant approach of social media giants, such as Facebook and Twitter, has shifted to more proactive regulation following incidents of election interference, live-streamed killings and extremist recruitment. Ultimately, companies realized that the exploitation of their platforms damaged both society and business. And though the manipulation of social media still occurs, regulation articulates companies’ stance against platform exploitation and provides grounds for penalties.
As a result of this perceived restriction of online liberty, smaller platforms have cropped up to ‘fill the gap’, self-promoting as havens of free speech to increase membership.
To some, these networks are simply another means of keeping in touch with friends and family. To others, they provide a space to express political opinions away from perceived government imposition. To extremists, these alternative platforms offer a place to post content freely and to network with like-minded individuals.
The effects of unregulated social media
The exodus of extremists from large social media companies offers certain benefits. Some toxic content moves away from large platforms like Facebook and Twitter, and therefore away from their large, unsuspecting audiences, who are potentially vulnerable to extremist propaganda.
However, a concentration of extremists on one platform can facilitate their organisation and networking. This puts individuals who use the platform innocently at greater risk of exposure to harmful content.
These platforms offer an accessible middle ground between the social media giants and the inaccessible corners of the dark net. The combination of accessibility and non-regulation makes it easier for extremists to solidify existing networks and promote their ideology to a new audience.
Exploitation of unregulated social media
Let’s look at an example of one such network.
Launched in 2017, Gab demonstrates the appeal of unregulated social media platforms to extremists. The service encourages users to #SpeakFreely and its founder Andrew Torba openly describes his stance against ‘the nanny-state, Big Brother-esque forum or model of the internet’. Gab’s Terms of Service underscore their commitment to free speech, though prohibit illegal pornography, incitement to violence, and threats which violate US law. Users include:
- Far-right figures, such as Richard Spencer and Tommy Robinson, who have taken their followings to Gab after being banned from other platforms.
- Individual extremists, who share content without the fear of regulation they would face on Facebook or Twitter. In 2018, Robert Bowers posted an anti-Semitic tirade on Gab before killing 11 people at a Pittsburgh synagogue.
- Extremist groups, such as Atomwaffen, which use the platform to organise, share hateful memes and recruit new members.
Illusions of unregulated social media
Yet, the promise of a haven for ‘free speech’ is an illusion.
Websites exist in networks, not in the vacuum that the image of limitless freedom suggests. Website domains are bought from registrars accredited by the multinational organisation ICANN (Internet Corporation for Assigned Names and Numbers). Hosts provide storage for websites. Apps are distributed through off-platform stores. External companies provide services to the platforms.
This dynamic resulted in a series of regulatory incidents for Gab which undermined its commitment to ‘free speech’:
- In September 2017, after receiving a complaint, Gab’s domain registrar AsiaRegistry gave Gab 48 hours to remove a post by Andrew Anglin, founder of the neo-Nazi site Daily Stormer, that mocked Heather Heyer, a victim of the Charlottesville attack, and violated the registrar’s abuse policy. Gab complied.
- Apple refused to offer the Gab app on the App Store, and Google Play removed the app it had previously offered.
- In August 2018, Microsoft threatened to cut off Gab’s access to its Azure hosting service unless it removed two anti-Semitic posts by Patrick Little, a former US Senate candidate expelled from the Republican Party for anti-Semitism, because they violated Microsoft’s Terms of Service.
- PayPal cancelled the Gab account that supported its Pro membership, stating that ‘when a site is allowing the perpetuation of hate, violence or discriminatory intolerance, we take immediate and decisive action.’
What does this tell us about social media regulation?
- Social media regulation depends on more than internal Terms of Service.
- External companies are willing to regulate social media content for societal and commercial reasons.
- The regulation of social media has become an accepted norm within businesses.
Today, social media looks very different from the unregulated space of years past. A paradigm shift has occurred following years of incidents that revealed the harmful potential of social media manipulation. Reasonable regulation is now understood as a norm to prevent the exploitation of platforms. Despite the efforts of some, even ‘unregulated’ social media platforms are not havens for hate.
PGI’s Social Media Intelligence team works with both public and private sector entities to help them understand how social media can affect their business. From high-level assessments of the risks of disinformation to electoral integrity in central Africa, to deep dives into specific state-sponsored activity in eastern Europe, we have applied our in-house capability globally. Contact us to talk about your requirements: email@example.com or +44 (0) 845 600 4403.