Learn about groundbreaking research from Google exploring the relationship between fine-tuning large language models (LLMs) and hallucinations in this 27-minute video. Dive into a comprehensive study of how introducing new factual knowledge during fine-tuning affects model behavior and accuracy. Explore the methodology in detail, including closed-book question answering, the EntityQuestions dataset, and the hierarchical SliCK system for categorizing what a model already knows. Discover the key findings: LLMs struggle to integrate genuinely new knowledge through fine-tuning, fitting that new information correlates linearly with increased hallucinations, and early stopping is effective at limiting factual errors. Understand the practical implications for fine-tuning pipelines, including balancing known and unknown examples, strategies for making better use of existing knowledge, and techniques for managing hallucination risk. Gain essential insights for maintaining accuracy and reliability when fine-tuning LLMs.
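As a rough illustration of the SliCK idea mentioned above, the sketch below categorizes a fine-tuning example by how reliably the base model already answers the question under few-shot prompting, then splits a training set into known and unknown examples. The `sample_answer` callable, the sampling counts, and the exact category thresholds are assumptions for illustration, not the study's implementation.

```python
from typing import Callable, List, Tuple

def categorize_example(
    question: str,
    gold_answer: str,
    sample_answer: Callable[[str, float], str],  # hypothetical: queries the *base* model with few-shot prompting
    n_samples: int = 10,
) -> str:
    """Assign a SliCK-style category by checking how often few-shot prompting
    of the pre-fine-tuned model recovers the gold answer."""
    greedy_correct = sample_answer(question, 0.0) == gold_answer
    sampled_hits = sum(
        sample_answer(question, 0.5) == gold_answer for _ in range(n_samples)
    )

    if greedy_correct and sampled_hits == n_samples:
        return "HighlyKnown"   # always correct, even under sampling
    if greedy_correct:
        return "MaybeKnown"    # correct greedily, sometimes wrong when sampling
    if sampled_hits > 0:
        return "WeaklyKnown"   # only occasionally correct under sampling
    return "Unknown"           # the model never produces the gold answer


def split_training_set(
    examples: List[Tuple[str, str]],
    sample_answer: Callable[[str, float], str],
) -> Tuple[list, list]:
    """Separate Known from Unknown examples so their ratio can be controlled,
    reflecting the 'balancing known and unknown examples' point above."""
    known, unknown = [], []
    for question, answer in examples:
        category = categorize_example(question, answer, sample_answer)
        target = unknown if category == "Unknown" else known
        target.append((question, answer, category))
    return known, unknown
```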