
Quantifying and Reducing Gender Stereotypes in Word Embeddings

Association for Computing Machinery (ACM) via YouTube

Overview

Explore gender stereotypes in word embeddings and learn techniques to quantify and reduce bias in this hands-on tutorial from the FAT* 2018 conference. Dive into the basics of how word embeddings are learned and applied, then gain practical experience writing programs to display and measure gender stereotypes in these widely used natural language processing tools. Discover methods to mitigate bias and support fairer algorithmic decision-making. Work through IPython notebooks to explore real-world examples and complete exercises that reinforce concepts of fairness in machine learning and natural language processing.
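The tutorial's notebooks are not reproduced on this page, but the following is a minimal sketch of one common way to quantify and reduce gender bias in embeddings: projecting word vectors onto a "he minus she" direction and then removing that component. It assumes plain NumPy and uses small hypothetical toy vectors rather than the pretrained embeddings the tutorial works with.

import numpy as np

# Hypothetical toy word vectors; a real analysis would load pretrained
# embeddings (e.g., word2vec or GloVe) instead of these made-up values.
vectors = {
    "he":       np.array([ 0.60, 0.10, 0.20]),
    "she":      np.array([-0.55, 0.12, 0.22]),
    "doctor":   np.array([ 0.30, 0.40, 0.10]),
    "nurse":    np.array([-0.35, 0.42, 0.08]),
    "engineer": np.array([ 0.28, 0.05, 0.45]),
}

def normalize(v):
    return v / np.linalg.norm(v)

# One common bias measure: project each word onto the "gender direction"
# defined by the difference between the "he" and "she" vectors.
gender_direction = normalize(vectors["he"] - vectors["she"])

def gender_bias(word):
    """Signed projection onto the gender direction (positive ~ 'he'-leaning)."""
    return float(np.dot(normalize(vectors[word]), gender_direction))

def debias(word):
    """Remove the gender-direction component of a gender-neutral word, then renormalize."""
    v = vectors[word]
    v = v - np.dot(v, gender_direction) * gender_direction
    return normalize(v)

for w in ["doctor", "nurse", "engineer"]:
    print(w,
          "bias before:", round(gender_bias(w), 3),
          "after:", round(float(np.dot(debias(w), gender_direction)), 3))

After the projection step, each occupation word's component along the gender direction is approximately zero, which is the core idea behind the debiasing approach the tutorial explores; the tutorial itself covers additional steps such as choosing which words to treat as gender-neutral.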

Syllabus

FAT* 2018 Hands-on Tutorial: Quantifying and Reducing Gender Stereotypes in Word Embeddings

Taught by

ACM FAccT Conference

