Algorithmic Genocide

Silencing Marginalized Voices Through AI Moderation


In an increasingly digital world, social media platforms have become vital spaces for self-expression, community building, and activism, especially for marginalized groups. However, these platforms also wield immense power to control what is seen and heard, often through algorithms and AI-driven moderation. TikTok, one of the most popular platforms globally, engages in a practice Claudia Alick has named "algorithmic genocide," in which key terms that marginalized people use to describe their experiences or identities are flagged as dangerous language. This practice effectively silences these communities, erasing them from public discourse and undermining their ability to exist both online and in broader society.

The Mechanics of Algorithmic Genocide

Algorithmic genocide is a term that encapsulates the harm caused when AI moderation systems on platforms like TikTok target and suppress the language and experiences of marginalized communities. TikTok’s AI is programmed to flag certain words or phrases as potentially harmful, often without understanding the context in which they are used. For example, if a content creator who is part of the LGBTQIA2S+ community uses the word "queer"—a term that has been reclaimed and embraced by many within the community—the platform may interpret it as offensive or inappropriate. This can lead to the creator losing access to features like live streaming, significantly limiting their ability to engage with their audience.

Similarly, disabled content creators face severe consequences when discussing their lived experiences. If a creator reads an essay by a disabled writer that includes terms like "retard," not as an endorsement but as a recounting of derogatory language used against them, TikTok’s AI may mark the video as "hate speech." The platform's algorithms, lacking the nuance to differentiate between context and intent, label the creator as engaging in harmful behavior. This false labeling not only damages the creator's reputation but also has psychological consequences, reinforcing the message that their identities and experiences are unwelcome and unsafe.
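To make the failure mode concrete, here is a minimal, hypothetical sketch of a context-blind keyword filter in Python. The term list and function are invented for illustration and do not reflect TikTok's actual moderation system, which is not public; the point is only that a filter matching words without reading context treats a quoted or reclaimed term exactly like an attack.

```python
# Hypothetical sketch of a context-blind keyword filter (not TikTok's actual
# system; its rules are not public). The term list is illustrative only.

FLAGGED_TERMS = {"queer", "retard"}  # invented example list

def naive_flag(transcript: str) -> bool:
    """Flag a transcript if it contains any listed term.

    Note what is missing: no notion of who is speaking, no quotation
    detection, no distinction between a slur aimed at someone and a slur
    being reported, quoted, or reclaimed. Every match looks the same.
    """
    words = {w.strip('.,!?"\'').lower() for w in transcript.split()}
    return bool(words & FLAGGED_TERMS)

# A creator quoting the abuse used against them is flagged like an attacker:
print(naive_flag('They called me a "retard" and it stayed with me for years.'))  # True
# A creator describing their own identity with a reclaimed term is flagged too:
print(naive_flag("I'm proud to be queer."))  # True
```

Both transcripts trigger the same outcome because the filter only sees the word, never the speaker or the intent.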

The Psychological and Social Impact

The impact of algorithmic genocide extends far beyond individual videos being taken down or creators losing access to certain features. The psychological toll on marginalized creators who are repeatedly silenced by these AI systems is profound. Being flagged for hate speech when discussing one’s own experiences or identity is a form of gaslighting, where the platform itself becomes an agent of harm, distorting reality and forcing creators to question the validity of their own narratives.

Moreover, this practice has broader social implications. By silencing marginalized voices, TikTok effectively erases these communities from the public sphere. When LGBTQIA2S+ or disabled creators are unable to speak openly about their identities, challenges, and triumphs, they are denied the opportunity to shape public discourse and influence cultural understanding. This erasure is not just a digital phenomenon; it has real-world consequences. When marginalized groups are silenced online, they lose a crucial platform for advocacy, education, and connection, which can lead to further marginalization and isolation in society at large.

Stochastic Terrorism and the Justification of Silencing

Algorithmic genocide on TikTok can be likened to a form of stochastic terrorism—a concept that refers to the use of mass communication to incite random acts of violence or harm against a particular group. While TikTok’s actions may not incite direct physical violence, they do contribute to an environment where the silencing of marginalized voices is normalized and justified. By labeling discussions of queer identity or disability as "hate speech," TikTok’s AI moderates these communities out of existence, justifying their exclusion under the guise of maintaining a "safe" platform.

This practice is particularly damaging because it forces creators to engage in "algo-speak," a form of coded language designed to evade AI moderation. Creators must contort their speech, avoiding certain words or phrases, in order to communicate their ideas without being penalized. This not only stifles authentic expression but also places an additional cognitive and emotional burden on creators, who must constantly navigate the ever-changing landscape of acceptable language on the platform.
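As a rough illustration of that burden, the sketch below (again hypothetical, with an invented substitution table) shows how algo-speak works in practice: the creator, not the platform, rewrites their own language until the filter no longer recognizes it.

```python
# Hypothetical sketch of "algo-speak": creators rewrite their own speech so a
# keyword filter like the one sketched earlier no longer matches it. The
# substitution table is illustrative, not a fixed or official vocabulary.
import re

ALGO_SPEAK = {
    "queer": "qveer",        # invented coded spelling for illustration
    "disabled": "d!sabled",
    "dead": "unalive",       # "unalive" is a widely reported algo-speak coinage
}

def to_algo_speak(text: str) -> str:
    """Replace each flagged word with its coded variant, preserving punctuation."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        return ALGO_SPEAK.get(word.lower(), word)
    return re.sub(r"[A-Za-z]+", swap, text)

print(to_algo_speak("Proud to be queer and disabled."))
# -> Proud to be qveer and d!sabled.
```

The rewriting is trivial to automate here, but on the platform it is manual, ongoing labor: creators must guess which words are flagged, invent workarounds, and update them whenever the rules silently change.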

A Personal Example: Calling Up Justice

The severity of TikTok’s algorithmic genocide is exemplified by the experience of Calling Up Justice, an organization dedicated to justice-themed performances, consultations, and digital spaces. While live streaming a reading of an essay by a disabled creator, Calling Up Justice was abruptly shut down for allegedly engaging in "hate speech." Despite the content being an important reflection on the lived experiences of disabled individuals, TikTok’s AI failed to recognize the context, instead penalizing the creators for discussing derogatory language used against them.

This unjust action had significant repercussions. Not only was Calling Up Justice's live stream interrupted, but the platform also prevented them from continuing their Accessible Virtual Pride event, an important space for community and celebration. Calling Up Justice submitted six appeals, but TikTok doubled down on its AI's flawed decision, refusing to reinstate the content or allow the event to continue. This incident highlights the devastating impact of algorithmic genocide on marginalized creators who are working to build inclusive, supportive spaces online.

The Road to Real Genocide

The erasure of marginalized voices online is not merely a digital inconvenience; it is the first step toward real-world genocide. When platforms like TikTok silence and disappear the experiences and identities of marginalized people, they contribute to a culture that devalues and dehumanizes these communities. The inability to speak about one’s own life and struggles is equivalent to being denied the right to exist. In the long term, this dehumanization lays the groundwork for further marginalization, discrimination, and violence.


TikTok’s algorithmic genocide is a stark reminder that the tools we use to moderate and manage online spaces must be developed with a deep understanding of context, identity, and power dynamics. When AI is used without such understanding, it becomes a blunt instrument that harms those it is supposed to protect. For marginalized communities, the consequences are dire: their voices are silenced, their identities are erased, and their existence is threatened.

Conclusion

TikTok’s practice of algorithmic genocide represents a significant threat to the visibility and survival of marginalized communities online. By using AI moderation to target and suppress the language and experiences of LGBTQIA2S+ and disabled creators, TikTok is engaging in a form of digital erasure that has profound psychological and social consequences. This practice forces creators to engage in algo-speak, distorting their ability to communicate authentically and pushing them further into the margins. The silencing of marginalized voices on TikTok is not just an issue of content moderation—it is an existential threat that must be recognized and addressed. As we continue to navigate the complexities of digital platforms, it is crucial that we demand more equitable and just practices that protect and uplift the voices of those who have been historically silenced.