Overview
Learn how to enhance Large Language Models (LLMs) with Retrieval Augmented Generation (RAG) in this technical talk presented by Mary Grygleski of DataStax. Explore the limitations of pre-trained foundation models such as ChatGPT in accessing and manipulating up-to-date knowledge, and discover how RAG overcomes these constraints by retrieving external data to augment prompts. Understand why RAG is more cost-effective and efficient than pre-training or fine-tuning a foundation model, and how it helps reduce LLM hallucinations. Dive into a practical implementation that takes an event-driven streaming approach with the open-source LangStream library, and learn how to integrate existing data streams into generative AI applications through prompt engineering and RAG patterns.
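To make the retrieve-then-augment pattern described above concrete, here is a minimal, self-contained Python sketch. It is not LangStream code: the toy bag-of-words "embedding", the in-memory document list, and the retrieve/build_prompt helpers are hypothetical stand-ins for a real embedding model, a vector store, and a production prompt template.

```python
import math
from collections import Counter

# Hypothetical in-memory knowledge base. In an event-driven setup like the
# one the talk describes, this would be a vector store kept up to date by
# a data stream rather than a static list.
DOCUMENTS = [
    "LangStream is an open-source library for event-driven GenAI apps.",
    "RAG retrieves external data and adds it to the prompt at query time.",
    "Fine-tuning updates model weights and is costlier than RAG.",
]

def embed(text: str) -> Counter:
    # Toy bag-of-words vector standing in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # Prompt engineering step: augment the user question with retrieved
    # context so the LLM answers from external data instead of guessing.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How does RAG compare to fine-tuning?"))
```

The query-time flow (retrieve, then augment the prompt) is the same in a streaming deployment; what changes is that ingestion and embedding of documents happen continuously as events arrive rather than up front.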
Syllabus
Boost LLMs with Retrieval Augmented Generation
Taught by
AICamp