Decoding on Graphs: Empowering LLMs with Knowledge Graphs Through Well-Formed Chains
Discover AI via YouTube
Overview
Watch a 29-minute research presentation exploring Decoding on Graphs (DoG), a framework developed by MIT and the University of Hong Kong that enhances Large Language Models' capabilities through Knowledge Graph integration. Learn how DoG employs "well-formed chains" - sequences of interconnected fact triplets - to improve question answering by ensuring that LLM responses align with Knowledge Graph structures. Discover how graph-aware constrained decoding is implemented with trie data structures and beam search, enabling the exploration of multiple reasoning paths while maintaining accuracy. Explore practical applications through examples, including a Harvard Medical implementation, and understand how this framework outperforms existing methods on complex multi-hop reasoning tasks. Delve into key concepts including subgraph retrievers, LLM-KG integration agents, linear graph forms, and constrained decoding mechanisms that make this approach both faithful and effective.
Syllabus
Augment LLMs with Knowledge Graphs
Subgraph retrievers
Agents for Integrating LLM and KG
NEW IDEA by MIT & HK Univ
Example of Decoding on Graphs
Implementation PROMPT DoG
Linear graph forms
Graph aware constrained decoding
Harvard MED Agents for LLM on KG
Taught by
Discover AI