
HOT: Higher-Order Dynamic Graph Representation Learning with Efficient Transformers

Scalable Parallel Computing Lab, SPCL @ ETH Zurich via YouTube

Overview

Explore dynamic graph representation learning with efficient transformers in this conference talk from the Second Learning on Graphs Conference (LoG'23). Dive into the HOT model, which enhances link prediction by leveraging higher-order graph structures. Discover how k-hop neighbors and subgraphs are encoded into the attention matrix of transformers to improve accuracy. Learn about the challenges of increased memory pressure and the innovative solutions using hierarchical attention matrices. Examine the model's architecture, including encoding higher-order structures, patching, alignment, concatenation, and the block recurrent transformer. Compare HOT's performance against other dynamic graph representation learning schemes and understand its potential applications in various dynamic graph learning workloads.
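To make the core idea more concrete, below is a minimal, hypothetical Python sketch of feeding higher-order (k-hop) neighborhood context to a transformer that scores a candidate link. It is not the HOT implementation described in the talk (which uses hierarchical attention matrices, patching/alignment/concatenation, and a block recurrent transformer); the helper names `khop_neighbors` and `HOTLikeScorer`, and all hyperparameters, are invented for illustration only.

```python
# Illustrative sketch only (not the HOT reference implementation): it shows the
# general idea of turning the k-hop neighborhoods of a candidate link (u, v)
# into a token sequence and scoring the link with a small transformer encoder.
import torch
import torch.nn as nn


def khop_neighbors(adj: dict, node: int, k: int) -> set:
    """Collect all nodes within k hops of `node` via BFS over an adjacency dict."""
    frontier, seen = {node}, {node}
    for _ in range(k):
        frontier = {m for n in frontier for m in adj.get(n, []) if m not in seen}
        seen |= frontier
    return seen


class HOTLikeScorer(nn.Module):
    """Scores a candidate link from the joint k-hop neighborhood token sequence."""

    def __init__(self, num_nodes: int, dim: int = 64, heads: int = 4, layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(num_nodes, dim)
        enc_layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(dim, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) node ids drawn from the higher-order context
        h = self.encoder(self.embed(token_ids))       # attention over k-hop context
        return self.head(h.mean(dim=1)).squeeze(-1)   # pooled link score (logit)


if __name__ == "__main__":
    # Toy static snapshot of a dynamic graph; a real pipeline would also encode time.
    adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
    u, v, k = 0, 4, 2
    tokens = sorted(khop_neighbors(adj, u, k) | khop_neighbors(adj, v, k))
    model = HOTLikeScorer(num_nodes=5)
    score = model(torch.tensor([tokens]))
    print("link score (logit) for (0, 4):", score.item())
```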

Syllabus

Introduction: Link Prediction
Introduction: Higher-Order Graph Structures
Higher-Order Enhanced Pipeline
Temporal Higher-Order Structures
Formal Setting of Dynamic Link Prediction
Model Architecture: Encoding Higher-Order Structures
Model Architecture: Patching, Alignment and Concatenation
Model Architecture: Block Recurrent Transformer
Evaluation

Taught by

Scalable Parallel Computing Lab, SPCL @ ETH Zurich
