
Harmful Speech Detection by Language Models - Gender-Queer Dialect Bias

Association for Computing Machinery (ACM) via YouTube

Overview

Watch a 19-minute ACM conference presentation examining how language models demonstrate bias in detecting harmful speech when analyzing gender-queer dialects. Explore research findings from Rebecca Dorn, Lee Kezar, Fred Morstatter, and Kristina Lerman that reveal systematic biases in how AI systems process and flag potentially harmful content across different gender expressions and linguistic patterns.

Syllabus

Harmful Speech Detection by Language Models Exhibits Gender-Queer Dialect Bias

Taught by

Association for Computing Machinery (ACM)

