Pro-social design: Building safer online spaces

Content moderation, a key tool in the Trust & Safety arsenal, is designed around a set of policy principles, but in practice it often evolves as a reaction to incoming harms. Content removal, account suspensions, and bans are essential tools, but they are increasingly seen as incomplete: they address the symptoms of harm, not the systems that enable harmful behaviour to thrive.
Enter ‘pro-social design’: a growing approach within the Trust & Safety space that blends behavioural science with intentional product design to nudge users toward constructive, respectful, and inclusive participation. It’s a shift away from relying solely on punishment, toward encouragement and reward for positive social behaviours.
Pro-social design combines ‘nudge theory’ with ‘positive reinforcement’ to shape how people behave online. Rather than policing after harm is done, it embeds cues, prompts, and incentives into the user experience to promote healthier interactions. Think earning kudos for helping others, social recognition among your peer group for positive actions (including reporting offensive activity), being prompted to pause before posting something controversial, or seeing your contributions elevated when they align with community norms.
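To make these mechanics concrete, here is a minimal Python sketch of two of the patterns above: a pre-post ‘pause’ nudge and a kudos ledger. Everything in it is hypothetical; the keyword check stands in for a real toxicity classifier, and the threshold and reward values are invented for illustration.

```python
from dataclasses import dataclass

NUDGE_THRESHOLD = 0.7  # hypothetical cut-off above which we prompt a pause


@dataclass
class Member:
    name: str
    kudos: int = 0  # accumulated positive-reinforcement points


def toxicity_score(text: str) -> float:
    """Stand-in for a real toxicity classifier; a crude keyword check."""
    hostile = {"idiot", "trash", "loser"}
    return 1.0 if any(word in hostile for word in text.lower().split()) else 0.0


def before_post(text: str) -> str:
    """Pre-post nudge: ask the author to pause rather than blocking them."""
    if toxicity_score(text) >= NUDGE_THRESHOLD:
        return "Pause: this may come across as hostile. Rephrase before posting?"
    return "posted"


def reward(member: Member, action: str) -> None:
    """Positive reinforcement: kudos for helpful or protective behaviour."""
    points = {"helped_newcomer": 5, "reported_abuse": 3}
    member.kudos += points.get(action, 0)


alice = Member("alice")
reward(alice, "reported_abuse")
print(before_post("you are trash"))  # -> the pause prompt, not a ban
print(alice.kudos)                   # -> 3
```

Note that the nudge returns a prompt to the author rather than removing anything: the intervention happens before harm is done, which is the distinguishing feature of the approach.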
This design strategy saw early success in gaming and is now being applied across some of the world’s largest online platforms.
Gaming environments were among the first to experiment with pro-social mechanics. Riot Games introduced an ‘honour system’ in League of Legends that rewarded teamwork and positive play, while Overwatch’s endorsement system led to measurable reductions in toxicity.
Mainstream platforms have adapted these principles to fit their communities. TikTok, for example, flags potentially harmful comments in real time and offers creator safety features that remove harmful content and promote positive interactions.
These are more than UX tweaks; they’re signals that positive behaviour matters.
The benefits of pro-social design level up when combined with proactive digital intelligence. With an in-depth understanding of harm networks and actor behaviours (not just content, but signals such as posting patterns, target platforms, and escalation triggers), platforms can anticipate harm and adapt the experience in real time.
Digital intelligence, including upstream and off-platform insights, recognises the signals that precede negative interactions. Armed with this knowledge, platforms can build smarter pro-social systems that intervene before harm occurs.
While not all online abuse can be predicted, digital intelligence provides insight into the more egregious harms (e.g., radicalisation, child exploitation, gender-based violence), enabling a pro-social response. This proactive approach not only contributes to scalability, but also aligns with regulatory mandates like the EU’s Digital Services Act and the UK’s Online Safety Act.
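As a rough illustration of what this looks like in practice, the sketch below scores behavioural signals (rather than content) and maps the result to a pro-social intervention. The signal names, weights, and thresholds are assumptions made for this example, not a description of any real platform’s model.

```python
# Hypothetical behavioural signals and weights; a real system would derive
# these from digital intelligence, including off-platform insight.
RISK_WEIGHTS = {
    "burst_posting": 0.4,         # many posts in a short window
    "new_account": 0.2,           # little or no history on the platform
    "off_platform_mention": 0.3,  # target named in flagged forum chatter
    "prior_warnings": 0.1,        # previous moderation contact
}


def risk_score(signals: dict[str, bool]) -> float:
    """Combine boolean behavioural signals into a 0..1 risk estimate."""
    return sum(weight for name, weight in RISK_WEIGHTS.items() if signals.get(name))


def adapt_experience(score: float) -> str:
    """Map anticipated risk to a pro-social adjustment, not a punishment."""
    if score >= 0.6:
        return "enable_chat_delay_and_vetted_follower_chat"
    if score >= 0.3:
        return "show_community_norms_onboarding"
    return "no_change"


signals = {"burst_posting": True, "off_platform_mention": True}
print(adapt_experience(risk_score(signals)))  # -> chat delay and vetted chat
```

The design choice worth noting is that the high-risk path hardens the experience (chat delay, vetted followers) rather than pre-emptively punishing anyone, which is what separates this from reactive enforcement.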
A popular live-streaming platform faced rising incidents of brigading, where groups coordinate to flood a streamer’s chat with abuse, often triggered by fringe forum discussions. This harassment was driving creators away and undermining community trust. Traditional moderation methods, like keyword filters and reactive bans, were too slow. The platform needed a proactive approach to prevent harm and foster positive engagement.
The solution combined pro-social design with proactive digital intelligence. A threat intelligence team flagged upcoming campaigns by analysing off-platform chatter, while on-platform behavioural analysis detected suspicious activity patterns. Streamers at risk were prompted to enable enhanced moderation features, such as chat delay and vetted follower-only chat. New or high-risk accounts received pro-social onboarding messages, encouraging respectful behaviour. Regular viewers were prompted to welcome newcomers, fostering a constructive environment.
If a user began to escalate, pre-post nudges encouraged rephrasing of harmful messages. Positive engagement by nearby users was rewarded with chat badges and shout-outs. Post-incident analysis identified effective interventions, improving future responses. Streamers received summaries showing abuse prevented and increased pro-social engagement, reinforcing trust.
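A hedged sketch of that escalation logic: watch a user’s recent chat messages and, once hostility ramps up, return a rephrase prompt instead of posting. The window size, limit, and per-message check are placeholder assumptions for the example.

```python
from collections import deque

WINDOW = 5            # recent messages considered per user (assumed)
ESCALATION_LIMIT = 2  # hostile messages within the window before nudging


def is_hostile(message: str) -> bool:
    """Placeholder for a real per-message classifier."""
    return any(word in message.lower() for word in ("trash", "awful", "quit"))


class EscalationWatcher:
    """Tracks one user's recent messages and nudges when they escalate."""

    def __init__(self) -> None:
        self.recent = deque(maxlen=WINDOW)  # rolling hostility flags

    def observe(self, message: str) -> str | None:
        """Return a pre-post nudge if the user appears to be escalating."""
        self.recent.append(is_hostile(message))
        if sum(self.recent) >= ESCALATION_LIMIT:
            return "You seem frustrated. Rephrase before this goes to chat?"
        return None


watcher = EscalationWatcher()
for msg in ("gg", "this team is trash", "you all play awful"):
    nudge = watcher.observe(msg)
print(nudge)  # -> the rephrase prompt, after the second hostile message
```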
As a result, harassment incidents decreased significantly, creating a safer, more welcoming environment. The platform gained valuable behavioural insights while fostering community-driven moderation.
Pro-social design doesn’t replace content moderation; it augments it. It creates digital spaces that make it easier to do the right thing, and harder to do harm. It rewards empathy, elevates civility, and aligns platform design with community values. As online spaces continue to take on the role of our digital public squares, the stakes couldn’t be higher. The future of trust and safety isn’t just about what platforms take down; it’s about what they choose to build up.
PGI’s Digital Investigation Analysts work with some of the world’s best-known brands to help them make online spaces safer. If you want to take your safety and compliance from reactive to proactive, let’s chat.