Harmful Speech Detection by Language Models - Gender-Queer Dialect Bias
Association for Computing Machinery (ACM) via YouTube
Overview
Watch a 19-minute ACM conference presentation examining how language models exhibit bias when detecting harmful speech in gender-queer dialects. Explore findings from Rebecca Dorn, Lee Kezar, Fred Morstatter, and Kristina Lerman that reveal systematic biases in how AI systems process and flag potentially harmful content across different gender expressions and linguistic patterns.
Syllabus
Harmful Speech Detection by Language Models Exhibits Gender-Queer Dialect Bias
Taught by
Association for Computing Machinery (ACM)