

Prompting Language Models Improves Quoting from Pre-Training Data - EACL 2024

Center for Language & Speech Processing (CLSP), JHU via YouTube

Overview

Explore a 10-minute conference talk by Marc Marone at EACL 2024 on the paper "According to …: Prompting Language Models Improves Quoting from Pre-Training Data." Delve into "according-to prompting," a technique designed to improve the factual accuracy of large language models (LLMs) by grounding their responses in previously observed text. Learn about QUIP-Score, a novel evaluation metric that measures how closely model-generated answers overlap with an underlying text corpus. Examine experiments on Wikipedia, PubMed, and the U.S. legal tax code that demonstrate improved grounding and task performance. Discover how LLMs can increase or decrease grounded generation on request, offering a potential tool for combating hallucination and fabricated information in AI language models.
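The grounding metric described above can be illustrated with a toy sketch. This is not the paper's implementation (the paper measures character n-gram overlap against an efficient membership sketch of the full pre-training corpus); the corpus list, n-gram length, and function names below are illustrative assumptions.

```python
# Simplified QUIP-like metric: the fraction of a generation's character
# n-grams that appear verbatim somewhere in a reference corpus.
# Assumptions (not from the paper): a small in-memory corpus list and a
# plain Python set for membership, instead of a corpus-scale sketch.

def char_ngrams(text: str, n: int) -> list[str]:
    """Return all overlapping character n-grams of `text`."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def quip_like_score(generation: str, corpus_docs: list[str], n: int = 25) -> float:
    """Precision of the generation's character n-grams against the corpus.

    1.0 means every n-gram of the generation occurs verbatim in some
    corpus document (fully "quoted"); 0.0 means none do.
    """
    corpus_grams: set[str] = set()
    for doc in corpus_docs:
        corpus_grams.update(char_ngrams(doc, n))
    gen_grams = char_ngrams(generation, n)
    if not gen_grams:
        return 0.0
    hits = sum(1 for gram in gen_grams if gram in corpus_grams)
    return hits / len(gen_grams)
```

For example, a generation copied verbatim from a corpus document scores 1.0, while a paraphrase that shares no long character spans with the corpus scores near 0.0; "according-to prompting" aims to push this score up (or down) on request.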

Syllabus

"According to …": Prompting Language Models Improves Quoting from Pre-Training Data -- EACL 2024

Taught by

Center for Language & Speech Processing (CLSP), JHU

