Developing LLM Applications with LangChain

via DataCamp

Overview

Discover how to build AI-powered applications using LLMs, prompts, chains, and agents in LangChain.

Harness the power of the LangChain framework to build applications based on large language models (LLMs). A major challenge of developing applications in the age of generative AI is integrating models, data sources, prompts, and other components from different providers into a single application. LangChain provides a unified syntax for putting all these pieces together, letting you integrate LLMs into your projects more seamlessly. Whether you're a seasoned developer or just getting started, this course will equip you with the knowledge and skills to build dynamic, intelligent applications powered by language models.

Syllabus

  • Introduction to LangChain & Chatbot Mechanics
    • Welcome to the LangChain framework for building applications on top of LLMs! You'll learn about the main components of LangChain, including models, chains, agents, prompts, and parsers. You'll create chatbots using both open-source models from Hugging Face and proprietary models from OpenAI, build prompt templates, and integrate different chatbot memory strategies to manage context and resources during conversations.
  • Chains and Agents
    • Time to level up your LangChain chains! You'll learn to use the LangChain Expression Language (LCEL) for defining chains with greater flexibility. You'll create sequential chains, where inputs are passed between components to create more advanced applications. You'll also begin to integrate agents, which use LLMs for decision-making.
  • Retrieval Augmented Generation (RAG)
    • One limitation of LLMs is that they have a knowledge cut-off, because they are trained only on data up to a certain point in time. In this chapter, you'll learn to create applications that use Retrieval Augmented Generation (RAG) to integrate external data with LLMs. The RAG workflow involves several steps: splitting data into chunks, creating and storing embeddings in a vector database, and retrieving the most relevant information for use in the application. You'll learn to master the entire workflow!
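To get a feel for the chain composition the second chapter describes, here is a minimal pure-Python sketch of the LCEL-style pattern: a prompt template, a model, and a parser piped together with the `|` operator. The class names and behavior below are simplified stand-ins for illustration only, not the real LangChain API (which provides `ChatPromptTemplate`, chat model wrappers, and `StrOutputParser` for this).

```python
# Illustrative sketch of LCEL-style chain composition in plain Python.
# All classes here are simplified stand-ins, NOT the LangChain API.

class Runnable:
    """Base component: supports `|` to pipe one step into the next."""
    def invoke(self, value):
        raise NotImplementedError

    def __or__(self, other):
        return Sequence(self, other)

class Sequence(Runnable):
    """Runs two components in order, feeding the first's output to the second."""
    def __init__(self, first, second):
        self.first, self.second = first, second

    def invoke(self, value):
        return self.second.invoke(self.first.invoke(value))

class PromptTemplate(Runnable):
    """Fills `{placeholders}` in a template string from an input dict."""
    def __init__(self, template):
        self.template = template

    def invoke(self, inputs):
        return self.template.format(**inputs)

class EchoModel(Runnable):
    """Toy 'LLM' that just tags its prompt; a real chain calls a model here."""
    def invoke(self, prompt):
        return f"RESPONSE[{prompt}]"

class Parser(Runnable):
    """Post-processes the model output, like an output parser would."""
    def invoke(self, text):
        return text.strip()

# Compose the chain with `|`, then run it with a single invoke() call.
chain = PromptTemplate("Summarize: {topic}") | EchoModel() | Parser()
print(chain.invoke({"topic": "vector databases"}))
# → RESPONSE[Summarize: vector databases]
```

The payoff of this pattern, and of LCEL itself, is that each component exposes the same `invoke` interface, so templates, models, and parsers from different providers can be swapped without changing the surrounding chain.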
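The RAG workflow in the third chapter can likewise be sketched end to end in plain Python. The chunking, "embedding" (a bag-of-words count vector here), and similarity search below are deliberately crude stand-ins for the text splitters, embedding models, and vector stores the course covers; every function name is an assumption made for this illustration.

```python
# Illustrative sketch of the RAG retrieval step: split -> embed -> retrieve.
# The bag-of-words "embedding" is a toy stand-in for a learned vector.
import math
from collections import Counter

def split_into_chunks(text, chunk_size=40):
    """Naive splitter: group words until ~chunk_size characters accumulate."""
    chunks, current = [], []
    for word in text.split():
        current.append(word)
        if sum(len(w) for w in current) >= chunk_size:
            chunks.append(" ".join(current))
            current = []
    if current:
        chunks.append(" ".join(current))
    return chunks

def embed(text):
    """Toy 'embedding': lowercase word counts with punctuation stripped."""
    cleaned = text.lower().replace(".", " ").replace("?", " ")
    return Counter(cleaned.split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

docs = ("LangChain supports retrieval augmented generation. "
        "Vector databases store embeddings for fast similarity search. "
        "Agents use LLMs to decide which tool to call next.")
chunks = split_into_chunks(docs)
print(retrieve("embeddings similarity search", chunks))
```

In a real RAG application the retrieved chunks would then be inserted into the prompt as context, grounding the LLM's answer in data it was never trained on.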

Taught by

Jonathan Bennion

Reviews

4.6 rating at DataCamp based on 13 ratings
