

Fixing LLM Hallucinations with Retrieval Augmentation in LangChain

James Briggs via YouTube

Overview

Learn how to address the data freshness problem in Large Language Models (LLMs) through retrieval augmentation using LangChain and the Pinecone vector database. Explore techniques to retrieve relevant information from external knowledge bases, enabling LLMs to access up-to-date information beyond their training data. Discover the process of data preprocessing, creating embeddings with OpenAI's Ada 002, setting up a Pinecone vector database, indexing data, and implementing generative question-answering with LangChain. Gain insights into adding citations to generated answers and understand the importance of retrieval augmentation in enhancing LLM performance.
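As a rough illustration of the indexing half of that workflow, the sketch below chunks some documents, embeds them with text-embedding-ada-002, creates a Pinecone index, and upserts the vectors. It assumes the pre-1.0 LangChain and pinecone-client v2 APIs that were current when the video was published, not the video's own notebook; the index name, sample documents, metadata fields, and environment variables are placeholders.

```python
import os

import pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Split raw documents into overlapping chunks small enough to embed and retrieve.
splitter = RecursiveCharacterTextSplitter(chunk_size=400, chunk_overlap=20)
docs = ["<your raw document text here>"]  # placeholder corpus
chunks = [chunk for doc in docs for chunk in splitter.split_text(doc)]

# Embed the chunks with OpenAI's text-embedding-ada-002.
embed = OpenAIEmbeddings(
    model="text-embedding-ada-002",
    openai_api_key=os.environ["OPENAI_API_KEY"],
)
vectors = embed.embed_documents(chunks)

# Create the Pinecone index once, sized for ada-002 embeddings, then connect to it.
pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_ENVIRONMENT"],
)
index_name = "langchain-retrieval-augmentation"  # placeholder index name
if index_name not in pinecone.list_indexes():
    pinecone.create_index(index_name, dimension=1536, metric="cosine")
index = pinecone.Index(index_name)

# Upsert (id, vector, metadata) tuples. The "text" field is what LangChain's
# Pinecone vector store reads back; "source" is what a with-sources chain can cite.
index.upsert(vectors=[
    (str(i), vec, {"text": chunk, "source": f"doc-{i}"})
    for i, (vec, chunk) in enumerate(zip(vectors, chunks))
])
```

Ada 002 produces 1536-dimensional embeddings, which is why the index above is created with dimension=1536.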

Syllabus

Hallucination in LLMs
Types of LLM Knowledge
Data Preprocessing with LangChain
Creating Embeddings with OpenAI's Ada 002
Creating the Pinecone Vector Database
Indexing Data into Our Database
Querying with LangChain
Generative Question-Answering with LangChain
Adding Citations to Generated Answers
Summary of Retrieval Augmentation in LangChain
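The querying, generative question-answering, and citation steps listed above can be sketched along the following lines, again assuming the same era of LangChain and pinecone-client APIs and the placeholder index from the indexing sketch. One common way to attach citations is LangChain's RetrievalQAWithSourcesChain, which returns the sources it drew on alongside the answer; this may differ in detail from the exact approach shown in the video.

```python
import os

import pinecone
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

# Connect to the index populated during the indexing step.
pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_ENVIRONMENT"],
)
index = pinecone.Index("langchain-retrieval-augmentation")  # placeholder name

# Wrap the index as a LangChain vector store; "text" is the metadata field
# holding each chunk's original text.
embed = OpenAIEmbeddings(model="text-embedding-ada-002")
vectorstore = Pinecone(index, embed.embed_query, "text")

# Plain retrieval: return the chunks most similar to the query.
query = "What is the data freshness problem in LLMs?"  # placeholder query
print(vectorstore.similarity_search(query, k=3))

# Generative QA: retrieved chunks are passed to the LLM, and the chain also
# reports which sources it used, which can be surfaced as citations.
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.0)
qa_with_sources = RetrievalQAWithSourcesChain.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
)
result = qa_with_sources(query)
print(result["answer"])
print(result["sources"])
```

The "stuff" chain type simply concatenates the retrieved chunks into a single prompt, which is the simplest option when the retrieved context fits within the model's context window.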

Taught by

James Briggs
