Explaining Model Decisions and Fixing Them Through Human Feedback

Stanford MedAI via YouTube


Classroom Contents


  1. Intro
  2. Interpretability in different stages of AI evolution
  3. Approaches for visual explanations
  4. Visualize any decision
  5. Visualizing Image Captioning models
  6. Visualizing Visual Question Answering models
  7. Analyzing failure modes
  8. Grad-CAM for predicting patient outcomes
  9. Extensions to multi-modal Transformer-based architectures
  10. Desirable properties of visual explanations
  11. Equalizer
  12. Biases in vision-and-language models
  13. Human Importance-aware Network Tuning (HINT)
  14. Contrastive Self-Supervised Learning (SSL)
  15. Why do SSL methods fail to generalize to arbitrary images?
  16. Does improved SSL grounding transfer to downstream tasks?
  17. CAST makes models resilient to background changes
  18. VQA for visually impaired users
  19. Sub-Question Importance-aware Network Tuning
  20. Explaining Model Decisions and Fixing Them via Human Feedback
  21. Grad-CAM for multi-modal transformers
