Explore a Stanford seminar on leveraging human input for robust AI systems. Delve into Daniel S. Brown's research on incorporating human feedback to enhance AI safety and performance. Learn about maintaining uncertainty over human intent, generating risk-averse behaviors, and efficiently querying for additional human input. Discover approaches for developing AI systems that can accurately interpret and respond to human guidance. Gain insights into the long-term vision for safe and robust AI, including learning from multi-modal human input, interpretable robustness, and human-in-the-loop machine learning techniques that extend beyond reward function uncertainty.
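The ideas the seminar highlights — keeping a distribution over possible human reward functions, acting risk-aversely under that uncertainty, and querying the human only when hypotheses disagree — can be sketched in a few lines. This is an illustrative toy example, not code from the talk: the posterior samples, the CVaR risk measure, and the disagreement threshold are all hypothetical stand-ins for whatever reward-learning method and risk criterion a real system would use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the agent is uncertain about the human's intent,
# represented as samples from a posterior over reward functions
# (e.g. obtained via Bayesian reward learning from demonstrations).
n_hypotheses = 50
n_actions = 4
# reward_samples[i, a]: return of action a under reward hypothesis i
reward_samples = rng.normal(loc=1.0, scale=1.0, size=(n_hypotheses, n_actions))

def cvar(returns, alpha=0.1):
    """Conditional value at risk: mean of the worst alpha-fraction of outcomes."""
    k = max(1, int(np.ceil(alpha * len(returns))))
    worst = np.sort(returns)[:k]
    return worst.mean()

# Risk-averse choice: pick the action with the best worst-case (CVaR)
# return across reward hypotheses, rather than the best average return.
risk_averse_action = max(range(n_actions), key=lambda a: cvar(reward_samples[:, a]))
risk_neutral_action = max(range(n_actions), key=lambda a: reward_samples[:, a].mean())

# Query heuristic: ask the human for additional input only when the
# hypotheses disagree strongly about the chosen action's value.
disagreement = reward_samples[:, risk_averse_action].std()
should_query = disagreement > 0.5  # hypothetical threshold
```

The key contrast is between the risk-neutral choice (maximize expected return) and the risk-averse one (maximize return in the worst hypotheses), with the disagreement check standing in for efficient querying: human input is requested only where uncertainty over intent actually matters.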
Syllabus
Stanford Seminar - Leveraging Human Input to Enable Robust AI Systems, Daniel S. Brown
Taught by
Stanford Online