Reducing Hallucinations and Evaluating LLMs for Production

Linux Foundation via YouTube

Overview

Explore the challenges of evaluating Large Language Models (LLMs) and reducing hallucinations in their outputs in this conference talk. Gain insights into traditional evaluation metrics such as BLEU and F1 scores, as well as modern approaches such as EleutherAI's evaluation framework. Examine the underlying causes of LLM hallucinations, including biases in training data and overfitting. Learn about open-source LLM validation modules and best practices for minimizing hallucinations to prepare LLMs for production use. Designed for practitioners, researchers, and enthusiasts with a basic understanding of language models, this talk offers practical guidance on improving LLM performance and reliability.
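
As a rough illustration of the surface-level metrics mentioned above (not material from the talk itself), the sketch below computes a sentence-level BLEU score with NLTK and a SQuAD-style token-overlap F1; the example strings and the helper name token_f1 are placeholders chosen for this sketch.

# Illustrative only: sentence-level BLEU (via NLTK) and a token-overlap F1
# for comparing an LLM answer against a reference answer.
from collections import Counter

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction


def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1, in the style of extractive QA evaluation."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


reference = "The Linux kernel was first released in 1991 by Linus Torvalds."
prediction = "Linus Torvalds first released the Linux kernel in 1991."

# BLEU expects tokenized text; smoothing avoids zero scores on short sentences.
bleu = sentence_bleu(
    [reference.lower().split()],
    prediction.lower().split(),
    smoothing_function=SmoothingFunction().method1,
)
print(f"BLEU: {bleu:.3f}")
print(f"Token F1: {token_f1(prediction, reference):.3f}")

Metrics like these reward surface overlap with a reference, which is part of why the talk also covers dedicated evaluation frameworks and validation tooling for production use.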

Syllabus

Reducing Hallucinations and Evaluating LLMs for Production - Divyansh Chaurasia, Deepchecks

Taught by

Linux Foundation
