Using Logic Programming and Kernel-Grouping for Improving Interpretability of Convolutional Neural Networks
ACM SIGPLAN via YouTube
Overview
Explore NeSyFOLD-G, a neurosymbolic framework that improves the interpretability of Convolutional Neural Networks (CNNs) in image classification tasks. Learn about the kernel-grouping algorithm that shrinks the generated rule-set, making it easier to interpret. Discover how the framework uses cosine similarity between feature maps to group similar kernels in the CNN's last convolutional layer, binarizes the output of each kernel group, and feeds the resulting binary vectors to the FOLD-SE-M algorithm to generate a symbolic rule-set. Understand how each predicate in the rule-set is mapped to a human-understandable concept using semantic segmentation masks. Gain insight into replacing the final layers of the CNN with the rule-set to create the NeSy-G model, and into using the s(CASP) system to obtain justifications for its predictions. Delve into a novel algorithm for labeling each predicate with the semantic concept(s) its kernel group represents, bridging the gap between the connectionist knowledge of the CNN and its symbolic representation.
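The pipeline described above lends itself to a compact illustration. Below is a minimal Python sketch of the two connectionist-side steps: grouping kernels by the cosine similarity of their flattened feature maps, then binarizing each group's pooled activation so it can serve as the truth value of one predicate. Everything here is an illustrative assumption rather than the paper's actual algorithm: the function names, the greedy single-pass grouping, the mean-pooling, the fixed similarity threshold, and the per-group cutoffs are all placeholders.

```python
import numpy as np

def group_kernels(feature_maps, threshold=0.8):
    """Group kernels whose flattened feature maps are similar, using
    pairwise cosine similarity.

    feature_maps: array of shape (num_kernels, H, W), e.g. the last
    convolutional layer's output for one image (or an average over a
    sample of images). Returns a list of kernel-index groups.
    """
    flat = feature_maps.reshape(feature_maps.shape[0], -1)
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    unit = flat / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T  # pairwise cosine similarities, shape (K, K)

    groups, assigned = [], set()
    for i in range(len(flat)):
        if i in assigned:
            continue
        # Greedily collect every still-unassigned kernel similar to kernel i
        # (sim[i, i] == 1, so kernel i always joins its own group).
        members = [j for j in range(len(flat))
                   if j not in assigned and sim[i, j] >= threshold]
        assigned.update(members)
        groups.append(members)
    return groups

def binarize_groups(feature_maps, groups, cutoffs):
    """Binarize each kernel group's combined activation: 1 if the group's
    pooled activation exceeds its cutoff, else 0. Each binary value stands
    in for the truth value of one predicate in the rule-set."""
    pooled = feature_maps.mean(axis=(1, 2))        # one scalar per kernel
    group_act = [pooled[g].sum() for g in groups]  # combine within a group
    return [int(a > c) for a, c in zip(group_act, cutoffs)]
```

Binary vectors of this kind are what FOLD-SE-M consumes to induce a rule-set. Schematically, an induced rule for a scene-classification task might read `target(X, 'bathroom') :- bathtub(X), not ab1(X)`, where the predicate name comes from the semantic-segmentation labeling step and s(CASP) can then walk the rules to justify a prediction; the rule shown is hypothetical, for flavor only.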
Syllabus
[PADL'24] Using Logic Programming and Kernel-Grouping for Improving Interpretability of Convolutional Neural Networks
Taught by
ACM SIGPLAN