Overview
Explore a comprehensive lecture on explainable AI and its societal impact delivered by Dr. Debasis Ganguly from the University of Glasgow. Delve into the paradigm shift from feature-driven to data-driven AI learning, examining how modern AI systems process information differently from human perception. Master key explanation methodologies including LIME, L2X, and Shapley-value algorithms while understanding their practical applications in search systems. Learn about the critical role of explainable AI in developing fair and trustworthy next-generation systems through topics such as multiclass classification, complex reasoning tasks, and attention weights. Discover how knowledge base linking, counterfactual estimation, and noise contrastive estimation contribute to building more transparent AI systems. Gain insights into the characteristics and outputs of explanation models, information retrieval, and local fidelity concepts that shape the future of AI development. Perfect for computer engineering and data science professionals seeking to understand the intersection of AI transparency and societal responsibility.
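To give a flavour of the LIME-style local surrogate explanations the lecture covers, the sketch below perturbs a single input, queries a black-box classifier, and fits a proximity-weighted linear model whose coefficients act as local feature importances. This is a rough illustration of the general idea only, not the speaker's implementation; the black-box model, dataset, noise scale, and kernel width are all placeholder assumptions.

```python
# Minimal LIME-style local surrogate (illustrative sketch, not the lecture's code).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

# An assumed "black-box" model that we want to explain locally.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_instance(x, model, n_samples=1000, kernel_width=1.0):
    """Fit a proximity-weighted linear surrogate around one instance x."""
    # 1. Perturb the instance to probe the model's behaviour in its neighbourhood.
    perturbed = x + np.random.normal(scale=0.5, size=(n_samples, x.shape[0]))
    # 2. Query the black box for the predicted probability of class 1.
    preds = model.predict_proba(perturbed)[:, 1]
    # 3. Weight each perturbed sample by its closeness to x (exponential kernel),
    #    so the surrogate is faithful locally rather than globally.
    dists = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # 4. Fit an interpretable linear model on the weighted neighbourhood;
    #    its coefficients serve as local feature importances.
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_

importances = explain_instance(X[0], black_box)
for i, w in enumerate(importances):
    print(f"feature {i}: {w:+.3f}")
```

The proximity weighting in step 3 is one way to read the "local fidelity" item in the syllabus: the surrogate only needs to agree with the black box near the instance being explained, not everywhere.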
Syllabus
Introduction
Talk Structure
Why is this required
Trust Issues
Expression
Visualisation
Expansion Units
Knowledge Base Linking
Counterfactual estimations
Characteristics of an explanation model
Output of an explanation model
Algorithm Line 1
Multiclass classification
Globalist connections
Complex reasoning tasks
Vanguard problems
Attention weights
Informational tasks
Complex models
Information retrieval
Sakaya 2019
Sampling
Output
Weights of Atoms
Noise Contrastive Estimation
Equation
Societal Implications
Conclusion
Questions
Local Fidelity
Taught by
IIIA Hub