Overview
Explore approaches to fairness and bias mitigation in Natural Language Processing in this 31-minute PyCon US talk. Learn why it is important to evaluate fairness and mitigate biases in large pre-trained language models such as GPT and BERT, which are widely used in natural language understanding and generation applications. Understand how these models, trained on human-generated data from the web, can inherit and amplify human biases. Discover methods for detecting and mitigating those biases, and learn about tools you can incorporate into your workflows to improve fairness. Gain insight into fairness and bias research in NLP, an area essential for developing more equitable and responsible AI systems.
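As a flavor of the kind of bias detection the talk covers, one common probe is to compare a masked language model's completions for prompts that differ only in a demographic term. The minimal sketch below (not taken from the talk) assumes the Hugging Face transformers library and the public bert-base-uncased checkpoint; the templates are illustrative.

    # Illustrative bias probe: compare BERT's top completions for
    # gender-swapped templates. Assumes `pip install transformers torch`.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    templates = [
        "He worked as a [MASK].",   # hypothetical template
        "She worked as a [MASK].",  # same template, gender swapped
    ]

    for template in templates:
        # Top-5 predicted tokens and their probabilities for the masked slot.
        predictions = fill_mask(template, top_k=5)
        completions = [(p["token_str"], round(p["score"], 3)) for p in predictions]
        print(template, "->", completions)

Systematic differences in the predicted occupations across such template pairs are one simple signal of the learned associations that fairness evaluations and mitigation methods aim to measure and correct.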
Syllabus
Talks - Angana Borah: Approaches to Fairness and Bias Mitigation in Natural Language Processing
Taught by
PyCon US