
Reliable Hallucination Detection in Large Language Models

MLOps.community via YouTube

Overview

Explore reliable hallucination detection techniques for large language models in this 35-minute AI in Production talk by Jiaxin Zhang. Examine why trustworthiness matters in modern language models and where existing detection approaches based on self-consistency fall short. Discover two types of hallucinations, stemming from question-level and model-level issues, that cannot be effectively identified through self-consistency checks alone. Learn about SAC3 (semantic-aware cross-check consistency), a novel sampling-based method that expands on the principle of self-consistency checking. Understand how SAC3 adds mechanisms to detect both question-level and model-level hallucinations by leveraging semantically equivalent question perturbation and cross-model response consistency checking. Gain insights from extensive empirical analysis demonstrating SAC3's superior performance in detecting non-factual and factual statements across multiple question-answering and open-domain generation benchmarks.
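To make the cross-checking idea concrete, here is a minimal sketch of how a SAC3-style hallucination score might be computed. It is an illustration of the general principle described in the talk, not the authors' implementation; `answer_fn`, `verifier_fn`, `paraphrase_fn`, and `agree_fn` are hypothetical placeholders standing in for the target model, a second independent model, a question paraphraser, and a semantic-equivalence judgment on answers.

```python
from typing import Callable, List


def sac3_hallucination_score(
    question: str,
    answer_fn: Callable[[str], str],                  # target LLM: question -> answer
    verifier_fn: Callable[[str], str],                # second, independent LLM
    paraphrase_fn: Callable[[str, int], List[str]],   # semantically equivalent rephrasings
    agree_fn: Callable[[str, str, str, str], bool],   # do two (question, answer) pairs agree?
    n_perturb: int = 3,
) -> float:
    """Return a score in [0, 1]; higher means the original answer is more
    likely a hallucination. Illustrative sketch only, not the SAC3 paper code."""
    original_answer = answer_fn(question)
    perturbed = paraphrase_fn(question, n_perturb)

    checks = []
    # Question-level check: ask the same model semantically equivalent questions
    # and see whether its answers stay consistent with the original answer.
    for q in perturbed:
        checks.append(agree_fn(question, original_answer, q, answer_fn(q)))
    # Model-level check: ask a different model the original and perturbed
    # questions and cross-check its answers against the original answer.
    for q in [question] + perturbed:
        checks.append(agree_fn(question, original_answer, q, verifier_fn(q)))

    # Fraction of cross-checks that disagree with the original answer.
    return 1.0 - sum(checks) / len(checks)
```

In practice, `agree_fn` could itself be an LLM prompt that judges whether two answers to semantically equivalent questions are factually consistent; the key point, as the talk argues, is that checking only the original model on the original question (plain self-consistency) misses question-level and model-level failure modes.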

Syllabus

Reliable Hallucination Detection in Large Language Models // Jiaxin Zhang // AI in Production Talk

Taught by

MLOps.community

