Explaining Model Decisions and Fixing Them Through Human Feedback
- 1 Intro
- 2 Interpretability in different stages of AI evolution
- 3 Approaches for visual explanations
- 4 Visualize any decision
- 5 Visualizing Image Captioning models
- 6 Visualizing Visual Question Answering models
- 7 Analyzing Failure modes
- 8 Grad-CAM for predicting patient outcomes
- 9 Extensions to Multi-modal Transformer based Architectures
- 10 Desirable properties of Visual Explanations
- 11 Equalizer
- 12 Biases in Vision and Language models
- 13 Human Importance-aware Network Tuning (HINT)
- 14 Contrastive Self-Supervised Learning (SSL)
- 15 Why do SSL methods fail to generalize to arbitrary images?
- 16 Does improved SSL grounding transfer to downstream tasks?
- 17 CAST makes models resilient to background changes
- 18 VQA for visually impaired users
- 19 Sub-Question Importance-aware Network Tuning
- 20 Explaining Model Decisions and Fixing Them via Human Feedback
- 21 Grad-CAM for multi-modal transformers