Overview
Dive deep into Graph Convolutional Networks (GCN) with this comprehensive 50-minute video lecture. Explore one of the most cited papers in the GNN literature, covering GCN from three different perspectives: spectral methods, the Weisfeiler-Lehman test, and Message Passing Neural Networks. Learn about graph Laplacian regularization methods, the GCN method in depth, its vectorized form, and the spectral methods motivating GCNs. Visualize GCN hidden features using t-SNE, understand the semi-supervised learning setup, and examine graph embedding methods and results. Compare GCN variations, analyze speed benchmarks and limitations, and investigate the Weisfeiler-Lehman perspective by contrasting GCN with Graph Isomorphism Networks (GIN). Gain insights into Graph Attention Networks (GAT) and explore the consequences of the Weisfeiler-Lehman test for GNN architectures and depth.
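For orientation on the vectorized form discussed in the lecture, the sketch below illustrates the standard GCN propagation rule H' = σ(D̂^(-1/2) Â D̂^(-1/2) H W), where Â = A + I is the adjacency matrix with self-loops. This is a minimal, illustrative NumPy sketch, not code from the video; the function name gcn_layer and the choice of ReLU are assumptions made here for clarity.

    import numpy as np

    def gcn_layer(A, H, W):
        # Â = A + I: adjacency matrix with self-loops added
        A_hat = A + np.eye(A.shape[0])
        # D̂^{-1/2}: inverse square root of the degree matrix of Â
        D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
        # Symmetrically normalized propagation followed by a ReLU nonlinearity
        return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

A single such layer mixes each node's features with those of its immediate neighbors; stacking layers widens the receptive field, which connects to the lecture's discussion of the Weisfeiler-Lehman test and GNN depth.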
Syllabus
Intro to GCNs
Graph Laplacian regularization methods
GCN method in-depth explanation
Vectorized form explanation
Spectral methods: the motivation behind GCNs
Visualizing GCN hidden features (t-SNE)
Explanation of semi-supervised learning process
Graph embedding methods and results
Different variations of GCN
Speed benchmarking & limitations
Weisfeiler-Lehman perspective: GCN vs GIN
GAT perspective and consequences of the WL test
GNN depth
Taught by
Aleksa Gordić - The AI Epiphany