Guest speaker Maria Thomas on behavioral science and online harassment
Quick recap. Guest speaker Maria Thomas (digital investigator with the RISE Information Security Foundation, background in behavioral science) presented on the behavioral science behind online harassment — why people pile on, how anonymity shapes group behavior, what dopamine and serotonin have to do with moral policing, and how the landscape has escalated from 1990s MUDs through Gamergate into today's NVE sextortion groups and AI-generated deepfakes. The recording covered the full presentation; Q&A happened after Shawn stopped recording and isn't included here.
Show 9 discussion topics
Why this topic for a cloud-security meeting
Neil introduced the session by noting that while online harassment isn't cloud security on its face, the behavioral-science framing applies directly to community dynamics — including CSOH's own. He pointed out that some participants have probably experienced online harassment, some may have participated in it, and that the mechanisms Maria describes are exactly what CSOH tries to invert to create a positive environment. Maria added that she didn't originate the research — she curated it into a presentation so practitioners can discuss mitigations.
What digital harassment actually looks like
Maria defined digital harassment as using information and communication technologies to repeatedly harm another person, and catalogued the common forms: direct harassment, cyberstalking, doxing, impersonation, identity abuse, pile-ons and coordinated harassment, and image-based abuse. These forms intersect and escalate: cyberstalking can lead to offline stalking; doxing has led to real-world violence and even murder (Marielle Franco in Brazil, Gauri Lankesh in India). Target groups are predictable: women, people of color, LGBTQ people, and anyone with visibility and authority. Amnesty International's Troll Patrol found that 7.1% of tweets sent to women politicians and journalists in its 2018 UK/US study were abusive or problematic (one every 30 seconds), and women of color were 34% more likely to be targeted than white women.
Why crowds behave worse than individuals
Maria traced mob-behavior theory from Gustave Le Bon's 1895 "The Crowd" (anonymity → lowered personal responsibility, invincibility, contagion, suggestibility) into modern refinements. The older deindividuation theory held that anonymity caused a loss of self and random behavior. The **SIDE model** (Social Identity Model of Deindividuation Effects) corrected this: anonymity doesn't erase identity, it shifts the person's salient identity to the group. A 1979 study gave test subjects anonymizing costumes — hate-group-style costumes produced aggressive behavior, nurse uniforms produced caring behavior. Anonymity amplifies whichever group norm is dominant.
Online disinhibition: five effects that lower the filter
Dr. John Suler's 2004 online disinhibition theory adds a second layer. Maria walked through five effects:

- **Dissociative anonymity** — the online persona is compartmentalized from the real-world self.
- **Physical invisibility** — no body language, no immediate consequences.
- **Message asynchronicity** — it feels like putting messages out, not interacting with a person.
- **Solipsistic introjection** — you unconsciously assign a voice and face to the person you're talking to, so the exchange starts to feel like a play you're writing for yourself.
- **Dissociative imagination** — some people see their online self as a fictional character, which frees them to act well outside their real-life norms.

Each effect adds permission. Stacked, they explain how people end up in a racist Telegram channel or, more benignly, a support group they'd never join in person.
Dopamine, serotonin, and why moral pile-ons are addictive
Maria walked through the neurochemistry. Serotonin is the calming, stable, mood-regulating neurotransmitter — boosted by slow, focused activity. Low serotonin correlates with impulsive aggression. Dopamine is the reward neurotransmitter, hardwired as a survival mechanism. Social media companies have engineered likes and shares as dopamine drips, and research shows that **moral policing and punishment** specifically light up the reward circuit. The cycle is self-reinforcing: low serotonin → seeking dopamine → dopamine from aggression → more aggression → tolerance → needing more. As of January 2026 there are 2,000+ social media lawsuits pending in the Northern District of California accusing platforms of engineering dopamine addiction.
A brief history, 1980s to Gamergate
Online harassment traces back to 1980s MUDs (where players typed "spam" to drown out discussion, giving us the modern word). The Drudge Report's 1998 naming of Monica Lewinsky was an early pile-on. 2003 saw 4chan's founding; 4chan raids demonstrated that real-world harm could be coordinated online. The early 2010s produced a wave of cyberbullying-driven youth suicides (Amanda Todd, Rehtaeh Parsons, Tyler Clementi, and others). **2014's Gamergate** is the origin point of modern networked harassment — coordinated doxing, threats, and impersonation campaigns against Brianna Wu, Zoe Quinn, and Anita Sarkeesian, complete with blacklists, email templates, and phone scripts for pressuring advertisers. Gamergate was so destructive that the founder of the platform where it was organized stepped down, citing it. On the global scale, Facebook admitted inadequate action during the 2017-2018 Rohingya genocide in Myanmar, where algorithmic amplification of hate speech contributed to violence.
NVE groups and the Sadistic Harm Radicalization Funnel
Nihilistic Violent Extremism (NVE) groups — also called Sadistic Online Exploitation groups — are youth-gang-style online networks (764, Cult, MKY, Harm Nation) engaged in sextortion, CSAM distribution, and coercing children into self-harm, violence against pets, and in some cases suicide. They operate across Telegram, Discord, and video games; shutting them down is whack-a-mole. Researcher Alex Slotnick of DevSec coined the "Sadistic Harm Radicalization Funnel" to describe how members escalate: **socialization** (normalization) → **voyeurism** (passive consumption, gateway material) → **participation** (active abuse, starting with minor real-world harm) → **skilled individual abuser** (specialized roles like swatter or doxer) → **ringleader**. Bradley Cadenhead, 764's founder, has been arrested; the group remains active.
AI-enabled abuse is a step-change
Maria flagged "nudifier" apps that animate real people's images into sex acts — advertised on Meta products in June 2025 and only partially removed. In December 2025–January 2026, X's Grok produced roughly 3 million sexualized images in 11 days, 23,000 of which appeared to depict children. A February 2026 UNICEF study of over a million children in 11 countries found 1 in 25 have had their images turned into sexually explicit deepfakes in the past year. AI has collapsed the skill barrier that used to keep this kind of abuse rare.
Where we go from here
The overall direction is bad — IC3 reports show incidents skyrocketing, and cyberbullying.org data show US teen victimization rising from 17% to 33% since 2016 and offending rates rising from 6% to 16%. Service providers are under-resourced, laws are a patchwork, and resources skew reactive rather than preventative because the victim volume demands it. Useful legal precedents exist — Coco's Law in Ireland criminalizes cyberbullying, and Australia's 2025 under-16 social-media ban places enforcement responsibility on platforms — and RISE maintains a bibliography of community and family resources linked from their site. Maria closed by pointing out that preventative work on the aggressor side is still largely missing; most current programs focus on victims and on lobbying platforms for design-level changes.