
Linux Foundation

Trusted and Responsible AI - Explainability, Adversarial Attacks, Bias, and Fairness

Linux Foundation via YouTube

Overview

Explore the critical aspects of Trusted and Responsible AI in this 37-minute conference talk by Dr. Vamsi Mohan Vandrangi. Delve into the principles of responsible AI, focusing on explainability (XAI), adversarial AI/ML, bias, and fairness in AI systems. Learn why XAI is crucial for building trust in machine learning algorithms and understand various adversarial attacks, including poisoning, evasion, and model stealing. Discover defense strategies such as adversarial training, switching models, and generalized models. Examine methods for identifying and fixing biases in AI and machine learning algorithms, ensuring fairness in AI-driven decision-making processes. Gain insights into the ethical considerations and practical approaches for implementing responsible AI in real-world business scenarios.
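As background for two topics the talk covers, evasion attacks and adversarial training, here is a minimal sketch (not taken from the talk) of the fast gradient sign method (FGSM) and a training step that mixes clean and perturbed inputs. It assumes PyTorch, an image classifier `model`, and inputs scaled to [0, 1]; the epsilon value and the 50/50 loss mix are illustrative choices, not the speaker's.

```python
# Illustrative sketch only: FGSM evasion attack and one adversarial-training step.
# Assumes a PyTorch classifier `model`, labels `y`, and inputs `x` in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft an evasion example by nudging the input along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Each input element moves slightly in the direction that increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One training step on an even mix of clean and adversarial examples."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Poisoning attacks and model stealing work differently (they target the training data and the model's query interface, respectively); the sketch above only illustrates the evasion/defense pair named in the talk description.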

Syllabus

OPEN SOURCE SUMMIT
Speaker Profile
Introduction
The principles of responsible AI
Why is XAI important?
Adversarial Machine Learning Defenses
Adversarial training
Switching models
Generalised models
Poisoning attacks
Evasion attacks
Model stealing
Methods of combating attacks
Fixing biases in AI and machine learning algorithms
Conclusion

Taught by

Linux Foundation
