
IBM

Project: Generative AI Applications with RAG and LangChain

IBM via Coursera

Overview

Get ready to put all your gen AI engineering skills into practice! This guided project tests and applies the knowledge and understanding you’ve gained throughout the previous courses in the program as you build your own real-world gen AI application.

During this course, you will fill the final gaps in your knowledge by extending your understanding of document loaders from LangChain. You will then apply your new skills to loading your own documents from various sources. Next, you will look at text-splitting strategies and use them to enhance model responsiveness. Then, you will use watsonx to embed documents, a vector database to store document embeddings, and LangChain to develop a retriever that fetches documents. As you work through your project, you will also implement RAG to improve retrieval, create a QA bot, and set up a simple Gradio interface to interact with your models.

By the end of the course, you will have a hands-on project that provides engaging evidence of your generative AI engineering skills, which you can talk about in interviews. If you’re ready to add some real-world experience to your portfolio, enroll today and fuel your AI engineering career.

Syllabus

  •  Document Loader Using LangChain 
    • In this module, you will learn all about document loaders from LangChain and then use that knowledge to load your documents from various sources. You will also explore various text-splitting strategies with RAG and LangChain and apply them to enhance model responsiveness. Hands-on labs will give you an opportunity to practice loading documents and to implement the text-splitting techniques you have learned (a minimal loading-and-splitting sketch appears after the syllabus).
  • RAG Using LangChain
    • In this module, you will learn how to store embeddings using a vector store and how to use Chroma DB to save embeddings. You’ll gain insights into LangChain retrievers such as the Vector Store-Based, Multi-Query, Self-Query, and Parent Document Retrievers. In hands-on labs, you’ll prepare and preprocess documents for embedding and use watsonx.ai to generate embeddings for your documents. You’ll use vector databases such as Chroma DB and FAISS to store the embeddings generated from your textual data, and you’ll use various retrievers to efficiently extract relevant document segments with LangChain (the second sketch after the syllabus illustrates this embedding-and-retrieval step).
  • Create a QA Bot to Read Your Document
    • In this module, you will learn how to implement RAG to improve retrieval. You will become familiar with Gradio and learn how to set up a simple Gradio interface to interact with your models. You will also learn how to construct a QA bot that answers questions from loaded documents using LangChain and LLMs. In hands-on labs, you will have the opportunity to practice setting up a Gradio interface as well as constructing a QA bot (the final sketch after the syllabus shows this pattern). In the final project, you will build an AI application using RAG and LangChain.
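
The first module walks through document loading and text splitting with LangChain. As a rough illustration of that workflow, here is a minimal sketch in Python; it assumes the langchain-community and langchain-text-splitters packages, uses a placeholder PDF path, and the exact module paths and chunk parameters may differ from what the labs use.

    from langchain_community.document_loaders import PyPDFLoader
    from langchain_text_splitters import RecursiveCharacterTextSplitter

    # Load a local PDF into LangChain Document objects (one per page).
    loader = PyPDFLoader("my_document.pdf")  # placeholder path
    documents = loader.load()

    # Split the pages into overlapping chunks so each chunk fits the model's
    # context window while keeping some continuity between neighbours.
    splitter = RecursiveCharacterTextSplitter(
        chunk_size=1000,    # illustrative chunk size in characters
        chunk_overlap=100,  # illustrative overlap
    )
    chunks = splitter.split_documents(documents)
    print(f"Loaded {len(documents)} pages, produced {len(chunks)} chunks")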
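
The second module covers embedding the chunks and retrieving them from a vector store. The sketch below continues from the chunks produced above and assumes the langchain-ibm (watsonx.ai) and Chroma integrations; the model id, endpoint URL, and credentials are placeholders, and parameter names can vary between library versions.

    from langchain_ibm import WatsonxEmbeddings
    from langchain_community.vectorstores import Chroma

    # Embed the chunks with a watsonx.ai embedding model (illustrative model id).
    embeddings = WatsonxEmbeddings(
        model_id="ibm/slate-125m-english-rtrvr",  # assumed embedding model
        url="https://us-south.ml.cloud.ibm.com",  # placeholder endpoint
        apikey="YOUR_API_KEY",                    # placeholder credentials
        project_id="YOUR_PROJECT_ID",
    )

    # Store the embeddings in Chroma DB and expose the store as a retriever
    # that returns the top-3 most similar chunks for a query.
    vectordb = Chroma.from_documents(documents=chunks, embedding=embeddings)
    retriever = vectordb.as_retriever(search_kwargs={"k": 3})
    relevant_docs = retriever.invoke("What is this document about?")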
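
Finally, the third module ties retrieval to an LLM and a Gradio front end. The sketch below builds on the retriever from the previous snippet and uses a RetrievalQA chain with a watsonx LLM; the model id and credentials are again placeholders, and any LangChain-compatible LLM could stand in.

    import gradio as gr
    from langchain.chains import RetrievalQA
    from langchain_ibm import WatsonxLLM

    # An LLM to generate answers from the retrieved context (illustrative model).
    llm = WatsonxLLM(
        model_id="ibm/granite-13b-instruct-v2",   # assumed model id
        url="https://us-south.ml.cloud.ibm.com",  # placeholder endpoint
        apikey="YOUR_API_KEY",
        project_id="YOUR_PROJECT_ID",
    )

    # RAG: the chain retrieves relevant chunks and stuffs them into the prompt.
    qa_chain = RetrievalQA.from_chain_type(llm=llm, retriever=retriever)

    def answer(question: str) -> str:
        """Run the RAG chain and return the generated answer text."""
        return qa_chain.invoke({"query": question})["result"]

    # A simple Gradio interface: type a question, get an answer from your document.
    demo = gr.Interface(fn=answer, inputs="text", outputs="text",
                        title="QA Bot over Your Documents")
    demo.launch()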

Taught by

Kang Wang and Wojciech 'Victor' Fulmyk
