
In this episode, we explore a particularly controversial topic: content moderation.
What is content moderation? What role does AI play in this context? How do platforms decide which content is allowed? We discuss the challenges and ethical dilemmas surrounding content moderation, as well as the impact these decisions could have on free speech and public discourse.
Sophie Butz and Cassandra Audibert are joined by David Hartmann, a researcher at the Weizenbaum Institute, to discuss these questions. Drawing on his expertise in computer science and philosophy, Hartmann talks about biases in algorithmic systems and the benefits and risks of using AI for content moderation.
For further information about the topic and David Hartmann's work, see the following links:
Hartmann, D., Oueslati, A., Munzert, S., Staufer, D., Pohlmann, L., & Heuer, H. (2025). Lost in moderation: How commercial content moderation APIs over- and under-moderate group-targeted hate speech and linguistic variations. Paper accepted for presentation at the CHI 2025 Conference, April 2025.
Hartmann, D., Oueslati, A., & Staufer, D. (2024). Watching the watchers: A comparative fairness audit of cloud-based content moderation services. Paper presented at the EWAF 2024 Conference.
Data workers worldwide report on their workplaces: https://data-workers.org/#Inquiries