Overview
Explore innovative approaches to medical image classification using gaze data as a supervision source in this Stanford University lecture. Delve into the potential of leveraging clinicians' eye movements during image analysis to train deep learning models, reducing reliance on expensive hand-labeled datasets. Learn about two proposed methods: Gaze-WS for scenarios without task labels and Gaze-MTL for multi-task learning when labels are available. Discover how these techniques can be applied to tasks such as classifying chest X-rays for pneumothorax and brain MRI slices for metastasis. Gain insights into the speaker's research on developing sustainable and reliable ML models for healthcare applications, and understand the broader implications for AI in medicine. The lecture covers topics including observational signals, problem settings, hidden stratifications causing model failures, and the Domino evaluation framework for subgroup robustness.
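To make the multi-task idea concrete, below is a minimal, illustrative sketch of how a gaze-derived auxiliary target might supervise a classifier alongside the primary image label, in the spirit of Gaze-MTL. This is not the speakers' actual implementation; the architecture, head names, heatmap size, and loss weighting are all assumptions for illustration.

```python
# Illustrative sketch only (assumed details, not the lecture's implementation):
# a shared image encoder feeds both a primary classification head and an
# auxiliary head that predicts a coarse gaze heatmap derived from eye tracking.
import torch
import torch.nn as nn


class GazeMTLNet(nn.Module):
    def __init__(self, num_classes: int = 2, gaze_map_size: int = 7):
        super().__init__()
        # Shared encoder (a tiny CNN stands in for a real backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(gaze_map_size),
        )
        feat_dim = 32 * gaze_map_size * gaze_map_size
        # Primary head: image-level label (e.g., pneumothorax vs. normal).
        self.cls_head = nn.Linear(feat_dim, num_classes)
        # Auxiliary head: predict a coarse gaze heatmap over the image.
        self.gaze_head = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, x):
        feats = self.encoder(x)                       # (B, 32, H, W)
        logits = self.cls_head(feats.flatten(1))      # (B, num_classes)
        gaze_pred = self.gaze_head(feats).squeeze(1)  # (B, H, W)
        return logits, gaze_pred


def gaze_mtl_loss(logits, labels, gaze_pred, gaze_target, aux_weight=0.5):
    """Combine the primary classification loss with a gaze-prediction loss."""
    cls_loss = nn.functional.cross_entropy(logits, labels)
    gaze_loss = nn.functional.mse_loss(gaze_pred, gaze_target)
    return cls_loss + aux_weight * gaze_loss
```

In this sketch the gaze heatmap acts purely as an auxiliary training signal: at inference time only the classification head is used, so the eye-tracking data adds no cost once the model is deployed.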
Syllabus
Intro
What are observational signals?
Problem settings for observational signals
Gaze-MTL
Hidden stratifications cause model failures
Investigating other observational signals
Domino: An evaluation framework for subgroup robustness
Taught by
Stanford MedAI