
Logic for Explainable AI - Tutorial

UCLA Automated Reasoning Group via YouTube

Overview

Dive into a comprehensive tutorial on Logic for Explainable AI presented by the UCLA Automated Reasoning Group. Explore three key dimensions of understanding the decisions made by learned classifiers: characterizing necessary and sufficient conditions for a decision, identifying the maximal aspects of an instance that are irrelevant to it, and determining minimal perturbations that would yield an alternate decision. Learn about a semantical and computational theory of explainability, based on recent developments in symbolic logic, that applies to a broad range of classifiers, including Bayesian networks, decision trees, random forests, and certain neural networks. Discover how to represent classifiers using tractable circuits and class formulas, understand discrete logic versus Boolean logic, and delve into sufficient, complete, and necessary reasons for decisions. Examine logical operators for computing instance abstractions, move beyond simple explanations, and investigate general reasons for decisions. Gain insights into targeting new decisions, the selection semantics of complete and general reasons, and compiling classifiers into class formulas from various machine learning models. This expanded version of a live tutorial from the 2023 ACM/IEEE Symposium on Logic in Computer Science offers nearly two hours of in-depth content, complete with an introduction, a conclusion, and a detailed breakdown of topics.
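The sufficient reasons mentioned above (aka PI-explanations or abductive explanations) can be illustrated with a small brute-force sketch: a sufficient reason is a minimal subset of an instance's feature settings that guarantees the decision no matter how the remaining features are set. The toy "admission" classifier and its feature names here are invented for illustration, not taken from the tutorial:

```python
from itertools import combinations, product

# Toy Boolean classifier (an invented example, not from the tutorial):
# admit a candidate iff (test AND interview) OR gpa.
def classifier(x):
    test, interview, gpa = x
    return int((test and interview) or gpa)

def is_sufficient(fixed, instance, n=3):
    """A partial instance (dict: feature index -> value) is sufficient
    if every completion of the free features yields the same decision."""
    target = classifier(instance)
    free = [i for i in range(n) if i not in fixed]
    for completion in product([0, 1], repeat=len(free)):
        x = list(instance)
        for i, v in zip(free, completion):
            x[i] = v
        if classifier(tuple(x)) != target:
            return False
    return True

def sufficient_reasons(instance, n=3):
    """Minimal sufficient subsets of the instance's feature settings,
    i.e. its PI-explanations / abductive explanations."""
    reasons = []
    for size in range(n + 1):
        for idxs in combinations(range(n), size):
            subset = tuple((i, instance[i]) for i in idxs)
            if any(set(r) <= set(subset) for r in reasons):
                continue  # a smaller sufficient reason is contained in it
            if is_sufficient(dict(subset), instance, n):
                reasons.append(subset)
    return reasons

# Instance: passed the test, good interview, high gpa -> decision 1.
# Two minimal reasons: {gpa=1} alone, or {test=1, interview=1} together.
print(sufficient_reasons((1, 1, 1)))  # [((2, 1),), ((0, 1), (1, 1))]
```

Brute-force enumeration is exponential in the number of features; the point of the tutorial's tractable-circuit representations is precisely to make such queries efficient for real classifiers.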

Syllabus

Introduction
From numeric to symbolic classifiers
Representing classifiers using tractable circuits
Representing classifiers using class formulas
Discrete logic vs Boolean logic
The sufficient reasons for decisions: why was a decision made? (aka abductive explanations, PI-explanations)
The complete reasons for decisions: instance abstraction
The necessary reasons for decisions: how can a decision be changed? (aka contrastive explanations, counterfactual explanations)
Terminology: PI-explanations, abductive explanations, contrastive explanations, counterfactual explanations
A logical operator for computing instance abstractions (complete reasons)
The first theory of explanation: A summary
Beyond simple explanations: A key insight
The general reasons for decisions: instance abstraction
Complete vs general reasons: two notions of instance abstraction
The general sufficient and general necessary reasons for decisions
The second theory of explanation: A summary
Targeting a new decision
Selection semantics of complete and general reasons (instance abstractions)
Compiling classifiers into class formulas from decision trees, random forests, Bayesian networks, and binary neural networks
Conclusion
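The last syllabus topic before the conclusion, compiling classifiers into class formulas, is easiest to see for decision trees: the class formula for a class is the disjunction of all root-to-leaf paths ending in that class. This sketch uses an invented tree encoding and feature names, not the tutorial's notation:

```python
# A tiny decision tree over Boolean features, compiled into a class
# formula for class 1. Node encoding (an assumption for illustration):
# ("leaf", cls) or ("split", feature_name, subtree_if_0, subtree_if_1).

def compile_tree(node, path=()):
    """Return the class formula for class 1 as a DNF: a list of terms,
    each term a tuple of (feature, value) literals along one path."""
    if node[0] == "leaf":
        return [path] if node[1] == 1 else []
    _, feat, low, high = node
    return (compile_tree(low, path + ((feat, 0),)) +
            compile_tree(high, path + ((feat, 1),)))

tree = ("split", "gpa",
        ("split", "test",
         ("leaf", 0),
         ("split", "interview", ("leaf", 0), ("leaf", 1))),
        ("leaf", 1))

# Two class-1 paths: (gpa=0, test=1, interview=1) and (gpa=1),
# i.e. the formula (~gpa & test & interview) | gpa.
print(compile_tree(tree))
```

Random forests, Bayesian networks, and neural networks need more machinery (aggregating votes, thresholding parameters), which is what the tutorial's compilation techniques address; the per-class-formula idea is the same.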

Taught by

UCLA Automated Reasoning Group
