Towards Falsifiable Interpretability Research in Machine Learning - Lecture

Bolei Zhou via YouTube

Overview

Explore a tutorial lecture on falsifiable interpretability research in machine learning for computer vision. Delve into key concepts including saliency maps, input invariants, model parameter randomization, and whether saliency explanations actually help human understanding. Examine case studies on individual neurons, activation maximization, and selective units, including the effects of ablating highly selective units. Learn about techniques for regularizing selectivity in generative models, and gain insight into the obstacles facing interpretability research and why the field needs robust, testable hypotheses.
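The model parameter randomization check mentioned above can be made concrete. Below is a minimal sketch, assuming PyTorch; the toy model, input, and vanilla-gradient saliency function are illustrative stand-ins, not from the lecture. The idea: if a saliency map reflects what the model has learned, randomizing the trained weights should substantially change the map.

```python
# Minimal sketch of a model parameter randomization sanity check for saliency.
# All model/data details here are illustrative assumptions, not from the lecture.
import torch
import torch.nn as nn

def gradient_saliency(model, x, target_class):
    """Vanilla gradient saliency: |d logit / d input|."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits[0, target_class].backward()
    return x.grad.abs().squeeze(0)

# Toy convolutional classifier standing in for a trained vision model.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

x = torch.randn(1, 3, 32, 32)  # stand-in input image
saliency_trained = gradient_saliency(model, x, target_class=0)

# Randomize the top layer's parameters (a cascading version of this check
# would continue randomizing layer by layer from the top down).
with torch.no_grad():
    for p in model[-1].parameters():
        p.copy_(torch.randn_like(p))

saliency_random = gradient_saliency(model, x, target_class=0)

# If the two maps remain highly correlated, the saliency method is
# insensitive to the model's parameters -- evidence against the hypothesis
# that it explains what the model learned.
a, b = saliency_trained.flatten(), saliency_random.flatten()
similarity = torch.corrcoef(torch.stack([a, b]))[0, 1]
print(f"Correlation between trained and randomized saliency: {similarity:.3f}")
```

A near-perfect correlation after randomization would falsify the hypothesis that the saliency method depends on the trained model, which is the kind of testable claim the lecture advocates.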

Syllabus

Introduction
Outline
Obstacles
Misdirection of saliency
What is saliency
Saliency axioms
Input invariants
Model parameter randomization
Does saliency help humans
Takeaways
Case Study 2
Individual neurons
Activation maximization
Populations
Selective units
Ablating selective units
Post-hoc studies
Regularizing selectivity
In generative models
Summary
Building better hypotheses
Building a stronger hypothesis
Key takeaways

Taught by

Bolei Zhou
