Overview
Explore how to align LLM judges for better evaluations in this hour-long webinar. Delve into evaluation strategies for LLM applications, focusing on LLM judge alignment and using a RAG pipeline as the case study. Learn to construct an effective evaluation system with LlamaIndex and to harness W&B Weave for systematic assessment and annotation. Discover why evaluation matters in LLM applications, compare frameworks such as LlamaIndex and LangChain, and understand the role of Weights & Biases in LLM Ops. Gain insight into RAG technology and its pipeline components, and watch a live demonstration of building a retriever and query engine. See how Weave integrates into the pipeline for trace viewing and customized evaluation. Uncover best practices across the entire evaluation lifecycle, concluding with final thoughts and a summary of key takeaways.
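As a rough illustration of the kind of pipeline demonstrated in the session, here is a minimal sketch (not the webinar's actual code) of a LlamaIndex retriever and query engine traced with W&B Weave. It assumes the llama-index and weave packages, an OpenAI API key for LlamaIndex's default LLM and embeddings, and a local data/ folder of documents; the project name and query text are placeholders.

```python
import weave
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Start a Weave project so every decorated call shows up as a trace in the W&B UI.
# The project name is a placeholder.
weave.init("llm-judge-alignment-demo")

@weave.op()
def answer(question: str) -> str:
    """Load documents, build a retriever-backed query engine, and answer a question."""
    documents = SimpleDirectoryReader("data").load_data()      # assumed local folder of documents
    index = VectorStoreIndex.from_documents(documents)         # embed and index the chunks
    query_engine = index.as_query_engine(similarity_top_k=3)   # retrieve the top-3 chunks per query
    return str(query_engine.query(question))

print(answer("What does the dataset say about evaluation?"))
```

Opening the project in the Weave UI then shows a trace for each call to answer, which corresponds to the trace-viewing and annotation workflow discussed in the session.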
Syllabus
Introduction and overview
Importance of evaluation in LLM Applications
Frameworks: LlamaIndex vs LangChain
Weights & Biases in LLM Ops
What is RAG?
Components of a RAG pipeline
Demo time
Building the retriever and query engine
Integrating Weave and viewing Traces
Customized evaluation and comparison
Final thoughts and summary
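For the "Customized evaluation and comparison" portion of the syllabus, the sketch below shows how a Weave evaluation over a small annotated dataset might look, assuming a recent weave version in which scorer functions receive the model output via an output argument. The dataset, project name, and the toy substring scorer standing in for an aligned LLM judge are all placeholders, not the webinar's code.

```python
import asyncio
import weave

weave.init("llm-judge-alignment-demo")  # placeholder project name

# Tiny hand-annotated dataset; in practice these labels would come from Weave annotations.
dataset = [
    {"question": "What is RAG?", "expected": "retrieval-augmented generation"},
    {"question": "Which library builds the query engine?", "expected": "LlamaIndex"},
]

@weave.op()
def rag_answer(question: str) -> str:
    # Stand-in for the real retriever + query engine sketched earlier.
    return "Retrieval-augmented generation." if "RAG" in question else "LlamaIndex."

@weave.op()
def contains_expected(expected: str, output: str) -> dict:
    # Toy heuristic scorer; an aligned LLM judge would replace this check.
    return {"correct": expected.lower() in output.lower()}

evaluation = weave.Evaluation(dataset=dataset, scorers=[contains_expected])
asyncio.run(evaluation.evaluate(rag_answer))  # scores and traces appear in the Weave UI
```

Running two such evaluations, for example with different retrievers or different judges, lets you compare them side by side in the Weave UI.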
Taught by
Weights & Biases