

Decoding on Graphs: Empowering LLMs with Knowledge Graphs Through Well-Formed Chains

Discover AI via YouTube

Overview

Watch a 29-minute research presentation exploring Decoding on Graphs (DoG), a framework developed by researchers at MIT and the University of Hong Kong that enhances Large Language Models through Knowledge Graph integration. Learn how DoG employs "well-formed chains" (sequences of interconnected fact triplets) to improve question answering by ensuring LLMs generate responses that stay aligned with the Knowledge Graph's structure. Discover how graph-aware constrained decoding is implemented with trie data structures, and how beam search keeps multiple reasoning paths alive while maintaining accuracy. Explore practical applications through examples, including a Harvard Medical School agent setup, and understand how the framework outperforms existing methods on complex multi-hop reasoning tasks. Delve into key concepts including subgraph retrievers, LLM-KG integration agents, linear graph forms, and constrained decoding mechanisms that make the approach both faithful and effective.
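
To make the "well-formed chain" idea concrete, here is a minimal Python sketch (not the authors' implementation) of triplet-level constrained beam search: each candidate chain may only be extended with triplets that exist in the retrieved subgraph and that start at the entity the previous triplet ended on. In the actual DoG framework the constraint is enforced at the token level, using a trie over serialized triplets during LLM decoding; here a hypothetical mock_score function stands in for the LLM, and all names (Triplet, build_adjacency, beam_search_chains) are illustrative.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

Triplet = Tuple[str, str, str]  # (head entity, relation, tail entity)


def build_adjacency(subgraph: List[Triplet]) -> Dict[str, List[Triplet]]:
    """Index the retrieved subgraph by head entity so chain extensions are cheap lookups."""
    adj: Dict[str, List[Triplet]] = defaultdict(list)
    for head, relation, tail in subgraph:
        adj[head].append((head, relation, tail))
    return adj


def is_well_formed(chain: List[Triplet]) -> bool:
    """A chain is well-formed when consecutive triplets connect tail-to-head."""
    return all(chain[i][2] == chain[i + 1][0] for i in range(len(chain) - 1))


def mock_score(chain: List[Triplet], question: str) -> float:
    """Hypothetical stand-in for an LLM's sequence log-probability."""
    q = question.lower()
    overlap = sum(tok in q for _, rel, _ in chain for tok in rel.split("_"))
    return overlap - 0.1 * len(chain)  # mild length penalty


def beam_search_chains(question: str, start_entity: str, subgraph: List[Triplet],
                       beam_width: int = 3, max_hops: int = 3) -> List[List[Triplet]]:
    """Keep several candidate reasoning paths alive, extending them only with KG triplets."""
    adj = build_adjacency(subgraph)
    beams: List[List[Triplet]] = [[]]
    for _ in range(max_hops):
        candidates = []
        for chain in beams:
            frontier = chain[-1][2] if chain else start_entity
            for triplet in adj.get(frontier, []):
                extended = chain + [triplet]
                if is_well_formed(extended):  # constraint: stay on the graph
                    candidates.append(extended)
        if not candidates:
            break
        beams = sorted(candidates, key=lambda c: mock_score(c, question),
                       reverse=True)[:beam_width]
    return beams


if __name__ == "__main__":
    # Toy subgraph; a real setup would use a subgraph retriever over a large KG.
    kg = [("Paris", "capital_of", "France"),
          ("France", "official_language", "French"),
          ("Paris", "located_in", "Europe")]
    question = "What is the official language of the country whose capital is Paris?"
    for chain in beam_search_chains(question, "Paris", kg):
        print(chain)
```

Working at triplet granularity keeps the sketch short; the token-level trie described in the video additionally guarantees that every decoded token sequence spells out a triplet actually present in the subgraph, which is what makes the generated reasoning chains faithful to the Knowledge Graph.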

Syllabus

Augment LLMs with Knowledge Graphs
Subgraph retrievers
Agents for Integrating LLM and KG
NEW IDEA by MIT & HK Univ
Example of Decode on Graphs
Implementation PROMPT DoG
Linear graph forms
Graph aware constrained decoding
Harvard MED Agents for LLM on KG

Taught by

Discover AI

