Overview
Learn how to implement streaming for LangChain Agents and serve it through FastAPI in this comprehensive 28-minute tutorial. Progress from basic LangChain streaming to advanced techniques: simple terminal streaming with LLMs, parsing streamed output with async iterators, and integrating OpenAI's GPT-3.5-turbo model through LangChain's ChatOpenAI object. Explore custom callback handlers, FastAPI integration, and essential considerations for deploying streaming in production. Accompanying code notebooks and FastAPI template code let you apply these concepts quickly in real-world scenarios.
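For context on the callback handlers the tutorial covers: LangChain's token streaming is built around handlers whose `on_llm_new_token` method fires once per generated token (its built-in `StreamingStdOutCallbackHandler` simply prints each one). The sketch below reproduces that pattern without any dependencies; `CollectingHandler` and `FakeStreamingLLM` are illustrative stand-ins, not LangChain classes.

```python
class CollectingHandler:
    """Collects streamed tokens; LangChain's StreamingStdOutCallbackHandler
    writes them to the terminal instead."""
    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # Called once per token as the model streams its output.
        self.tokens.append(token)


class FakeStreamingLLM:
    """Stand-in for a streaming chat model (e.g. ChatOpenAI with
    streaming=True and a callbacks list). Not a real LangChain class."""
    def __init__(self, handler):
        self.handler = handler

    def generate(self, tokens):
        # Emit each token to the handler, then return the full response.
        for tok in tokens:
            self.handler.on_llm_new_token(tok)
        return "".join(tokens)


handler = CollectingHandler()
llm = FakeStreamingLLM(handler)
result = llm.generate(["Hello", ",", " world", "!"])
print(result)          # full response: Hello, world!
print(handler.tokens)  # the per-token stream the handler observed
```

With a real `ChatOpenAI` model, you would pass the handler in the model's callbacks and enable streaming, but the token-by-token handler flow is the same.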
Syllabus
Streaming for LLMs and Agents
Simple StdOut Streaming in LangChain
Streaming with LangChain Agents
Final Output Streaming
Custom Callback Handlers in LangChain
FastAPI with LangChain Agent Streaming
Confirming we have Agent Streaming
Custom Callback Handlers for Async
Final Things to Consider
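A common way to bridge an async callback handler to a FastAPI streaming endpoint, the combination the FastAPI lessons build toward, is an `asyncio.Queue`: the handler pushes tokens as the agent produces them, and a generator drains the queue for the HTTP response. The dependency-free sketch below shows that producer/consumer pattern; the class and function names are illustrative, and in a real app `token_stream` would be wrapped in `fastapi.responses.StreamingResponse`.

```python
import asyncio

END = object()  # sentinel marking the end of the token stream


class AsyncQueueHandler:
    """Illustrative async callback handler that forwards tokens to a queue."""
    def __init__(self):
        self.queue: asyncio.Queue = asyncio.Queue()

    async def on_llm_new_token(self, token: str, **kwargs) -> None:
        await self.queue.put(token)

    async def on_llm_end(self, **kwargs) -> None:
        await self.queue.put(END)


async def run_fake_agent(handler, tokens):
    """Stand-in for an agent call that streams tokens via the handler."""
    for tok in tokens:
        await handler.on_llm_new_token(tok)
    await handler.on_llm_end()


async def token_stream(handler):
    """Async generator a FastAPI StreamingResponse would consume."""
    while True:
        item = await handler.queue.get()
        if item is END:
            break
        yield item


async def main():
    handler = AsyncQueueHandler()
    # Run the "agent" concurrently while the consumer drains the queue.
    producer = asyncio.create_task(
        run_fake_agent(handler, ["stream", "ing", " works"])
    )
    chunks = [chunk async for chunk in token_stream(handler)]
    await producer
    return "".join(chunks)


print(asyncio.run(main()))  # -> streaming works
```

The queue decouples the agent's pace from the client's, which is why this shape appears in the async callback handler and FastAPI portions of the tutorial.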
Taught by
James Briggs