Overview
Explore key challenges in building and evaluating trustworthy AI systems through this 80-minute lecture that begins with a foundational recap of human trust in AI. Learn what factors influence people's trust in reliable models before diving deep into critical issues in AI evaluation, including data shortcuts in benchmarking, the importance of independent test instances, maintaining basic system functionality, and ensuring robust evaluation methods. Examine the complexities of securing high-quality pretraining data and understand how these challenges impact the development of dependable AI systems that can earn justified user trust.
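As a rough illustration of the "data shortcuts" and "independent test instances" topics covered in the lecture, the hypothetical NumPy sketch below (not taken from the course material; the `make_split` helper and the synthetic benchmark are assumptions for illustration only) shows how a spurious feature shared by the train and test splits can make a trivial model look accurate, and why an independent test split without the artifact tells a very different story.

```python
# Hypothetical sketch of a benchmark data shortcut (not from the lecture).
# A spurious feature leaks the label in both train and test splits, so a
# trivial "model" scores highly until evaluated on an independent split
# where the artifact is removed.
import numpy as np

rng = np.random.default_rng(0)

def make_split(n, shortcut_strength):
    """Labels are random; feature 0 leaks the label with probability
    `shortcut_strength`; the remaining features are pure noise."""
    y = rng.integers(0, 2, size=n)
    leak = np.where(rng.random(n) < shortcut_strength, y, 1 - y)
    noise = rng.random((n, 9))
    X = np.column_stack([leak, noise])
    return X, y

X_train, y_train = make_split(2000, shortcut_strength=0.95)  # artifact present
X_test,  y_test  = make_split(1000, shortcut_strength=0.95)  # same artifact
X_indep, y_indep = make_split(1000, shortcut_strength=0.50)  # artifact removed

# "Model" that simply copies the shortcut feature.
predict = lambda X: X[:, 0].astype(int)

print("in-distribution test accuracy:", (predict(X_test) == y_test).mean())    # ~0.95
print("independent test accuracy:   ", (predict(X_indep) == y_indep).mean())   # ~0.50
```

The near-chance score on the independent split is the point: a benchmark whose test set inherits the same collection artifacts as the training set can certify a model that has learned nothing about the task.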
Syllabus
Human trust in AI recap
What factors enable people to trust trustworthy models?
Challenges of constructing valid benchmarks: Data shortcuts
Issue #2: Evaluations without independent test instances
Issue #3: Failing to check that the basics still work
Issue #4: Evaluations of robustness are not robust
Challenges of ensuring high-quality pretraining data
Taught by
UofU Data Science