Overview
Learn how to build a chatbot that uses Retrieval Augmented Generation (RAG) in this comprehensive video tutorial. Explore the entire process from start to finish, using OpenAI's gpt-3.5-turbo Large Language Model (LLM) as the core engine. Implement the chatbot with LangChain's ChatOpenAI class, generate embeddings with OpenAI's text-embedding-ada-002 model, and use a Pinecone vector database as the knowledge base. Gain insight into RAG pipelines, understand why LLMs hallucinate, and discover techniques to reduce hallucinations. Follow along as the tutorial walks through adding context to prompts, building the vector database, and integrating RAG into the chatbot. Test the final RAG chatbot and learn important considerations for implementing RAG in your own projects.
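The tutorial itself is code-driven. For orientation, the query-time flow it describes looks roughly like the minimal sketch below. This is an illustration rather than the tutorial's exact code: it assumes current langchain-openai and pinecone packages, an existing Pinecone index named "rag-demo", and document chunks stored under a "text" metadata key (all hypothetical names).

```python
import os
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_core.messages import SystemMessage, HumanMessage
from pinecone import Pinecone

# gpt-3.5-turbo generates answers, text-embedding-ada-002 embeds queries,
# and Pinecone serves as the external knowledge base. API keys are read from
# the OPENAI_API_KEY and PINECONE_API_KEY environment variables.
chat = ChatOpenAI(model="gpt-3.5-turbo")
embed = OpenAIEmbeddings(model="text-embedding-ada-002")
index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("rag-demo")  # hypothetical index name

def augment_prompt(query: str, top_k: int = 3) -> str:
    """Embed the query, fetch the most similar chunks, and prepend them as context."""
    results = index.query(vector=embed.embed_query(query), top_k=top_k, include_metadata=True)
    context = "\n\n".join(match.metadata["text"] for match in results.matches)
    return f"Answer the question using the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content=augment_prompt("What is so special about Llama 2?")),
]
print(chat.invoke(messages).content)
```

The point of the pattern is that the model never answers from its parametric memory alone: retrieved context is added to the prompt before ChatOpenAI is called, which is how RAG helps reduce hallucinations.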
Syllabus
Chatbots with RAG
RAG Pipeline
Hallucinations in LLMs
LangChain ChatOpenAI Chatbot
Reducing LLM Hallucinations
Adding Context to Prompts
Building the Vector Database
Adding RAG to Chatbot
Testing the RAG Chatbot
Important Notes when using RAG
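The "Building the Vector Database" and "Adding RAG to Chatbot" steps revolve around populating Pinecone with text-embedding-ada-002 vectors that the chatbot can later retrieve. The sketch below shows one plausible way to do this; it is an illustrative outline, not the tutorial's code, and the index name, cloud region, and sample chunks are placeholders.

```python
import os
from langchain_openai import OpenAIEmbeddings
from pinecone import Pinecone, ServerlessSpec

embed = OpenAIEmbeddings(model="text-embedding-ada-002")  # produces 1536-dimensional vectors
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])

index_name = "rag-demo"  # hypothetical name, matching the query sketch above
if index_name not in pc.list_indexes().names():
    pc.create_index(
        name=index_name,
        dimension=1536,   # must match the embedding model's output size
        metric="cosine",
        spec=ServerlessSpec(cloud="aws", region="us-east-1"),
    )
index = pc.Index(index_name)

# Placeholder chunks; the tutorial works with its own dataset of document chunks.
chunks = [
    {"id": "doc-0", "text": "Llama 2 is a family of open-weight LLMs released by Meta."},
    {"id": "doc-1", "text": "RAG grounds an LLM's answers in retrieved documents."},
]
vectors = [
    {"id": c["id"], "values": v, "metadata": {"text": c["text"]}}
    for c, v in zip(chunks, embed.embed_documents([c["text"] for c in chunks]))
]
index.upsert(vectors=vectors)
```

Storing the original chunk text as metadata is what lets the query-time code pull readable context back out of the index instead of just vector IDs.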
Taught by
James Briggs