Hi, I’m Miriam!
I’m currently a Visiting Postdoc at the Digital Emotions Lab at Harvard Business School.
I develop computational approaches to analyze harmful content online, such as violent language or abusive behavior. I analyze large-scale text data with Natural Language Processing (NLP), identify patterns, and combine these with experimental studies to see how we can connect results from language models with real-world behavior.
My current projects focus on
- understanding how people interact with information that attacks scientists online,
- child safety on TikTok, and
- validating online harm detection with psychological assessment (for instance, if we detect hate speech, do people actually feel bad when they read it?)
I was previously a postdoctoral researcher at the LINK Lab at Northwestern University. I received my PhD from the Technical University of Munich (Computational Social Science Lab). I hold three undergraduate degrees, in Psychology, Political Science, and History, and a master’s degree in Criminology.
I have been a visiting researcher at the University of Michigan, the University of Cambridge, and the Auschwitz Institute for the Prevention of Mass Atrocities in New York City.
Updates
Mar, 2026
✈️ From one bean town to the next: I’ve left Chicago behind to start a visiting postdoc position at the Digital Emotions Lab at Harvard Business School.
Jan, 2026
📺 Our paper “Just Another Hour on TikTok” was published in the Journal of Quantitative Description: Digital Media and covered by a piece in The Economist.
Dec, 2025
🏆 I received the Dissertation Award of the Freunde der TUM e.V. of the Technical University of Munich for my dissertation on NLP for Violence Studies.
Dec, 2025
🗣️ I presented ongoing work on hostility against scientists at the 2025 MIT Polarization Workshop.
Nov, 2025
⚖️ It was a privilege to visit the Human Rights Center at UC Berkeley School of Law to discuss practical questions related to my work on online hate speech.
Selected Publications
Just Another Hour on TikTok: Reverse-Engineering Unique Identifiers to Obtain a Complete Slice of TikTok
Steel, B., Schirmer, M., Ruths, D., & Pfeffer, J. (2026)
Journal of Quantitative Description: Digital Media, 6.
Detecting Child Objectification on Social Media: Challenges in Language Modeling
Schirmer, M., Voggenreiter, A., Pfeffer, J., & Horvát, E.-Á. (2025)
Proceedings of the 9th Workshop on Online Abuse and Harms (WOAH), pages 396–412, Vienna, Austria. Association for Computational Linguistics.
Disparities by Design: Toward a Research Agenda That Links Science Misinformation and Socioeconomic Marginalization in the Age of AI
Schirmer, M., Walter, N., & Horvát, E.-Á. (2025)
Harvard Kennedy School Misinformation Review.
Large Language Models and the Challenge of Analyzing Discriminatory Discourse: Human-AI Synergy in Researching Hate Speech on Social Media
Breazu, P., Schirmer, M., Hu, S., & Kastos, N. (2025)
Journal of Multicultural Discourses, 1–19.
The Language of Trauma: Modeling Traumatic Event Descriptions Across Domains with Explainable AI
Schirmer, M., Leemann, L., Kasneci, G., Pfeffer, J., & Jurgens, D. (2024)
Findings of the Association for Computational Linguistics: EMNLP 2024, pages 13224–13242, Miami, Florida, USA. Association for Computational Linguistics.
Investigating the Increase of Violent Speech in Incel Communities with Human-Guided GPT-4 Prompt Iteration
Matter, D.*, Schirmer, M.*, Grinberg, N., & Pfeffer, J. (2024)
Frontiers in Social Psychology, 2, 1383152 (* equal contribution).
Talks
Schirmer, M. (2025, November). Natural Language Processing for Trauma Detection. Human Rights Center, UC Berkeley.
Schirmer, M. (2025, October). Natural Language Processing for Harm Detection and Mitigation. CCEW Online Speaker Series, University of the Bundeswehr Munich.
Schirmer, M. (2025, July). Understanding and Reducing the Psychological Impact of Online Harm. MilaNLP Seminar, Bocconi University.
Schirmer, M. (2025, April). Sharenting and Child Exposure on TikTok. Text-as-Data (TaDa) Speaker Series.
Schirmer, M. (2025, February). Natural Language Processing for Objectification Detection. Information Sciences Institute (ISI), University of Southern California.
About Me
If I were to visit your city, you’d find me exploring a local bookshop for my next book club pick, then settling into a cozy coffee shop, or joining a salsa or bachata social.
I’m one of the deputy chairs at Genocide Alert, a German human rights organization that advocates for the effective prevention and punishment of grave human rights violations such as genocide and crimes against humanity.
My work and studies have been supported by the German Academic Scholarship Foundation, the German Business Foundation, and the Bavarian Research Institute for Digital Transformation.
