
Mitigating Bias in Models with SHAP and Fairlearn

Linux Foundation via YouTube

Overview

Explore techniques for addressing bias in machine learning models in this conference talk by Sean Owen of Databricks. Dive into SHAP (SHapley Additive exPlanations), which attributes a model's predictions to individual input features, and Fairlearn, which provides group-fairness metrics and bias-mitigation algorithms. Learn how these tools can improve model interpretability and fairness, and pick up practical strategies for building more equitable and transparent machine learning solutions.
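
For readers who want a concrete starting point before watching, the sketch below is not from the talk itself; it is a minimal illustration of how SHAP attributions and Fairlearn's group metrics and reduction-based mitigation might be combined. The synthetic data, the "sex" column, and the model choice are assumptions made for the example.

import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from fairlearn.metrics import MetricFrame, selection_rate
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Synthetic data with a sensitive attribute; names and distributions are illustrative.
rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, n),
    "age": rng.integers(18, 70, n).astype(float),
    "sex": rng.integers(0, 2, n),  # sensitive attribute (two groups, 0/1)
})
# Outcome deliberately correlated with the sensitive attribute to create bias.
y = ((X["income"] + 8_000 * X["sex"] + rng.normal(0, 10_000, n)) > 55_000).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# SHAP: per-feature attributions show how strongly "sex" drives predictions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print("mean |SHAP| per feature:",
      dict(zip(X_test.columns, np.abs(shap_values).mean(axis=0).round(3))))

# Fairlearn: compare selection rates across groups of the sensitive attribute.
baseline = MetricFrame(metrics=selection_rate, y_true=y_test,
                       y_pred=model.predict(X_test),
                       sensitive_features=X_test["sex"])
print("baseline selection rate by group:\n", baseline.by_group)

# Mitigation: retrain under a demographic-parity constraint.
mitigator = ExponentiatedGradient(GradientBoostingClassifier(random_state=0),
                                  constraints=DemographicParity())
mitigator.fit(X_train, y_train, sensitive_features=X_train["sex"])
mitigated = MetricFrame(metrics=selection_rate, y_true=y_test,
                        y_pred=mitigator.predict(X_test),
                        sensitive_features=X_test["sex"])
print("mitigated selection rate by group:\n", mitigated.by_group)

Comparing the two MetricFrame outputs shows how the gap in selection rates between groups narrows after mitigation, which is the kind of before-and-after assessment the talk discusses.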

Syllabus

Mitigating Bias in Models with SHAP and Fairlearn - Sean Owen, Databricks

Taught by

Linux Foundation
