Overview
Learn how to evaluate the performance of large language models (LLMs) such as OpenAI's GPT-3 using LangChain evaluation with custom prompts and datasets from Hugging Face. The course covers creating environments, installing packages with pip, code reviews for LLM question answering, custom prompts for LLM evaluation, code reviews for agents with tool evaluation, and a final code review. The teaching method combines demonstrations, code reviews, and practical exercises. This course is intended for individuals interested in evaluating language models and agents using LangChain and custom prompts.
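To illustrate the idea behind LLM QA evaluation, here is a minimal, self-contained sketch. The function names and the normalized exact-match grading rule are illustrative assumptions, not the course's code: in the course, LangChain's evaluation chains play the grader role, whereas this plain-Python stand-in simply compares predictions against reference answers.

```python
# Hypothetical sketch of LLM QA evaluation: grade model predictions
# against reference answers using a simple normalized exact-match check.
# A LangChain evaluation chain would normally act as the grader; this
# stand-in keeps the example runnable without an API key.

def normalize(text: str) -> str:
    """Lowercase, drop punctuation, and collapse whitespace for fair comparison."""
    cleaned = "".join(c for c in text.lower() if c.isalnum() or c.isspace())
    return " ".join(cleaned.split())

def evaluate_qa(examples, predictions):
    """Return per-example CORRECT/INCORRECT grades and overall accuracy."""
    grades = [
        "CORRECT" if normalize(pred) == normalize(ex["answer"]) else "INCORRECT"
        for ex, pred in zip(examples, predictions)
    ]
    accuracy = grades.count("CORRECT") / len(grades)
    return grades, accuracy

# Toy dataset and stand-in model outputs (in the course, the dataset
# would come from Hugging Face and the predictions from an LLM).
examples = [
    {"question": "What is the capital of France?", "answer": "Paris"},
    {"question": "Who wrote Hamlet?", "answer": "William Shakespeare"},
]
predictions = ["Paris.", "Christopher Marlowe"]

grades, accuracy = evaluate_qa(examples, predictions)
print(grades)    # → ['CORRECT', 'INCORRECT']
print(accuracy)  # → 0.5
```

Exact-match grading is deliberately simplistic; an LLM-based grader, as covered in the course, can credit semantically correct answers that differ in wording.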
Syllabus
Intro and demo
Creating environment and pip installs
Code review for LLM QA
Custom prompt for LLM eval
Code review for Agent with tool eval
Final code review
Taught by
echohive