Hi, I’m Miriam!


I’m currently a Visiting Postdoc at the Digital Emotions Lab at Harvard Business School.

I develop computational approaches to analyze harmful content online, such as violent language or abusive behavior. I analyze large-scale text data with Natural Language Processing (NLP), identify patterns, and combine these with experimental studies to see how we can connect results from language models with real-world behavior.

My current projects focus on

  • understanding how people interact with information that attacks scientists online,
  • child safety on TikTok, and
  • validating online harm detection with psychological assessment (for instance, if we detect hate speech, do people actually feel bad when they read it?).

I was previously a postdoctoral researcher at LINK Lab at Northwestern University. I received my PhD from the Technical University of Munich (Computational Social Science Lab). I hold three undergraduate degrees in Psychology, Political Science, and History, and a master’s degree in Criminology.

I have been a visiting researcher at the University of Michigan, the University of Cambridge, and the Auschwitz Institute for the Prevention of Mass Atrocities in New York City.

Updates

Mar, 2026
✈️ From one bean town to the next: I’ve left Chicago behind to start a visiting postdoc position at the Digital Emotions Lab at Harvard Business School.

Jan, 2026
📺 Our paper Just Another Hour on TikTok was published in the Journal of Quantitative Description: Digital Media and was covered by a piece in The Economist.

Dec, 2025
🏆 I received the Dissertation Award of the Freunde der TUM e.V. of the Technical University of Munich for my dissertation on NLP for Violence Studies.

Dec, 2025
🗣️ I presented ongoing work on hostility against scientists at the 2025 MIT Polarization Workshop.

Nov, 2025
⚖️ It was a privilege to visit the Human Rights Center at UC Berkeley School of Law to discuss practical questions related to my work on online hate speech.

Selected Publications

See all publications


Talks

  • Schirmer, M. (2025, November). Natural Language Processing for Trauma Detection. Human Rights Center, UC Berkeley.

  • Schirmer, M. (2025, October). Natural Language Processing for Harm Detection and Mitigation. CCEW Online Speaker Series, University of the Bundeswehr Munich.

  • Schirmer, M. (2025, July). Understanding and Reducing the Psychological Impact of Online Harm. MilaNLP Seminar, Bocconi University.

  • Schirmer, M. (2025, April). Sharenting and Child Exposure on TikTok. Text-as-Data (TaDa) Speaker Series.

  • Schirmer, M. (2025, February). Natural Language Processing for Objectification Detection. Information Sciences Institute (ISI), University of Southern California.

About Me

If I were to visit your city, you’d find me exploring a local bookshop for my next book club pick, then settling into a cozy coffee shop, or joining a salsa or bachata social.

I’m one of the deputy chairs at Genocide Alert, a German human rights organization that advocates for the effective prevention and punishment of grave human rights violations such as genocide and crimes against humanity.

My work and studies have been supported by the German Academic Scholarship Foundation, the German Business Foundation, and the Bavarian Research Institute for Digital Transformation.