In this course, we start with the concepts and use of Large Language Models, exploring popular LLMs such as OpenAI GPT and Google Gemini. We then cover Language Embeddings and Vector Databases, and move on to the LangChain LLM framework to develop RAG applications that combine the strengths of LLMs and LLM frameworks.
The capabilities of LLMs need not be confined to tools such as ChatGPT, Google Gemini, or Anthropic Claude. You can apply the powerful natural language capabilities of LLMs to your own organizational data to create amazing automations and applications, known as Retrieval Augmented Generation (RAG) applications.
Key components of the course include Prompt Engineering for RAG applications and working with Agents, Tools, Documents, Loaders, Splitters, Output Parsers, and more, all essential ingredients of RAG applications.
Participants should have a basic understanding of Python programming and a foundational knowledge of Large Language Models (LLMs) to make the most of this course.
By the end of this course, you'll be able to develop RAG applications using Large Language Models, LangChain, and Vector Databases. You will learn to write effective prompts, understand models and tokens, and apply vector databases to automate workflows. You'll also grasp key LangChain concepts to build simple to medium-complexity RAG applications.
Syllabus
- Introduction to Retrieval Augmented Generation (RAG)
Taught by
Manas Dasgupta