Explainable AI: Interpreting, Explaining and Visualizing Deep Learning

University of Central Florida via YouTube

Overview

Explore the principles of Explainable AI in this 32-minute lecture from the University of Central Florida's CAP6412 course. Delve into the challenges of interpreting deep learning models, examining alternative explanation techniques beyond Taylor Decomposition. Learn about the four key properties of effective explanation methods and understand the Layer-wise Relevance Propagation (LRP) rules for deep rectifier networks. Discover how to implement LRP efficiently and its connection to Deep Taylor Decomposition. Analyze the properties of explanations and explore rule choices using the VGG-16 network. Gain valuable insights into the importance of explainability in AI and its implications for various applications.
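The lecture's LRP-0 rule redistributes a layer's output relevance to its inputs in proportion to each input's contribution to the pre-activation. A minimal NumPy sketch for a single dense layer is shown below; the function name, the dense-layer setup, and the `eps` stabilizer are illustrative assumptions, not code from the course.

```python
import numpy as np

def lrp_0(a, w, b, relevance_out, eps=1e-9):
    """One LRP-0 backward step through a dense layer (illustrative sketch).

    a: input activations, shape [in]
    w: weight matrix, shape [in, out]
    b: biases, shape [out]
    relevance_out: relevance scores at the layer output, shape [out]
    Returns relevance redistributed to the inputs, shape [in].
    """
    z = a @ w + b                   # forward pre-activations z_k
    s = relevance_out / (z + eps)   # relevance per unit of contribution
    return a * (w @ s)              # each input gets its proportional share
```

With zero biases, the rule conserves relevance: the input relevances sum to the output relevances, which is one of the properties of explanation methods the lecture examines.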

Syllabus

Introduction
Explainable Machine Learning
Problems with Taylor Decomposition
Alternative Explanation Techniques
Four Properties of Good Explanation Techniques
LRP Rules for Deep Rectifier Networks
LRP Rules: LRP-0
Implementing LRP Efficiently
LRP as a Deep Taylor Decomposition (ii)
Properties of Explanations (ii)
Rule Choices with VGG-16
Conclusion
Against
Questions?

Taught by

UCF CRCV

