
Gender Shades - Intersectional Accuracy Disparities in Commercial Gender Classification

Association for Computing Machinery (ACM) via YouTube

Overview

Watch a thought-provoking conference talk from FAT* 2018 in which Joy Buolamwini presents groundbreaking research on intersectional accuracy disparities in commercial gender classification systems. Explore the motivations behind the study, understand how the benchmark was constructed, and delve into the evaluation of gender classification accuracy across skin types and genders. Learn how major tech companies, including Microsoft, Face++, and IBM, responded to the findings. Gain insights into why intersectionality matters in AI, the dangers of biased data, and the ethical implications of deploying such technology. Engage with key takeaways and a Q&A session addressing questions about the Fitzpatrick skin type scale and the broader implications of this research for artificial intelligence and society at large.
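To make the idea of an intersectional evaluation concrete, the sketch below shows one way such a disaggregated accuracy analysis can be computed in Python: accuracy is reported per skin-type-and-gender subgroup rather than only overall. The field names, labels, and sample records are hypothetical illustrations and are not taken from the Gender Shades benchmark or the talk's code.

```python
# Illustrative sketch (not the paper's code): disaggregate gender-classification
# accuracy by the intersection of skin type and gender.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of dicts with hypothetical keys
    'skin_type', 'gender', 'predicted', and 'actual'."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for r in records:
        key = (r["skin_type"], r["gender"])  # intersectional subgroup
        totals[key] += 1
        correct[key] += int(r["predicted"] == r["actual"])
    # Accuracy for each (skin_type, gender) subgroup
    return {k: correct[k] / totals[k] for k in totals}

# Toy example data, purely for demonstration:
sample = [
    {"skin_type": "darker",  "gender": "female", "predicted": "male",   "actual": "female"},
    {"skin_type": "darker",  "gender": "female", "predicted": "female", "actual": "female"},
    {"skin_type": "lighter", "gender": "male",   "predicted": "male",   "actual": "male"},
]
print(subgroup_accuracy(sample))
```

Reporting results this way is what surfaces disparities that an aggregate accuracy number can hide, which is the central point of the talk.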

Syllabus

Introduction
Motivation
Background
Gender Classification
Benchmarks
Labeling
Benchmark Limitations
Overall Accuracy
Accuracy by Gender
Accuracy on Skin Type
Intersectional Evaluation for Gender Classification
Microsoft
Face++
IBM
Companies' Responses
Microsoft Response
IBM Response
Key takeaways
Intersectionality matters
The dangers of supremely white data
How is this technology used
Is this a good thing
How this technology is used
Quick question
Question
Question on the Fitzpatrick scale
Conclusion

Taught by

ACM FAccT Conference
