

Fine-Tuning Large Language Models: Impact on Knowledge Integration and Hallucinations

Discover AI via YouTube

Overview

Learn about research from Google exploring the relationship between fine-tuning large language models (LLMs) and hallucinations in this 27-minute video. Dive into a comprehensive study examining how introducing new factual knowledge during fine-tuning affects model behavior and accuracy. Explore the methodology, which combines closed-book question answering, the ENTITYQUESTIONS dataset, and the hierarchical SliCK (Sampling-based Categorization of Knowledge) classification system. Discover the key findings: LLMs struggle to integrate new knowledge through fine-tuning, hallucinations increase roughly linearly as the model learns new information, and early stopping is effective at preventing factual errors. Understand the practical implications for fine-tuning, including the importance of balancing known and unknown examples, strategies for knowledge utilization, and risk management techniques. Come away with essential insights for maintaining model accuracy and reliability when fine-tuning large language models.
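To make the practical implications concrete, here is a minimal Python sketch of one idea the video highlights: probing a model to label fine-tuning examples as Known or Unknown (a simplified stand-in for the paper's sampling-based SliCK categorization) and filtering Unknown examples before training. The model name, few-shot prompt, and sampling parameters are illustrative assumptions, not the study's actual setup.

```python
# Minimal sketch, NOT the paper's implementation: label QA pairs as
# Known/Unknown by sampling model answers, then keep only Known ones
# for fine-tuning. All names and hyperparameters are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; the study used a much larger model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

# Few-shot prefix so the model answers in a closed-book QA format.
FEW_SHOT = (
    "Q: In which country is Paris located?\nA: France\n"
    "Q: In which country is Kyoto located?\nA: Japan\n"
)

@torch.no_grad()
def is_known(question: str, gold_answer: str, n_samples: int = 8) -> bool:
    """Return True if any sampled completion contains the gold answer."""
    prompt = FEW_SHOT + f"Q: {question}\nA:"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.5,
        num_return_sequences=n_samples,
        max_new_tokens=8,
        pad_token_id=tokenizer.eos_token_id,
    )
    prompt_len = inputs["input_ids"].shape[1]
    completions = tokenizer.batch_decode(
        outputs[:, prompt_len:], skip_special_tokens=True
    )
    return any(gold_answer.lower() in c.lower() for c in completions)

if __name__ == "__main__":
    dataset = [
        ("In which country is Berlin located?", "Germany"),
        ("In which country is Ouagadougou located?", "Burkina Faso"),
    ]
    # Keep only examples the model already "knows": per the findings,
    # fitting Unknown examples correlates with more hallucinations.
    train_set = [ex for ex in dataset if is_known(*ex)]
    print(f"Kept {len(train_set)} of {len(dataset)} examples for fine-tuning")
```

Because fitting Unknown examples is what correlates with increased hallucination, filtering them before training is one risk-management lever the findings suggest; early stopping works for the same reason, since Unknown examples are fit more slowly than Known ones.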

Syllabus

New Trick for Fine-Tuning LLMs #airesearch

Taught by

Discover AI
