YouTube

Aligning LLM-Assisted Evaluation with Human Preferences

Association for Computing Machinery (ACM) via YouTube

Overview

Learn about the critical challenges and methodologies of validating Large Language Model (LLM) outputs in this 17-minute conference talk from the 37th Annual ACM Symposium on User Interface Software and Technology (UIST 2024). Explore the complex relationship between LLM-assisted evaluation methods and human preferences, examining how these automated validation systems align with human judgment. Delve into key questions about the reliability and accuracy of using LLMs to evaluate other LLMs' outputs, while gaining insights into the latest research findings presented at this prestigious ACM symposium in Pittsburgh.

Syllabus

Who Validates the Validators? Aligning LLM-Assisted Evaluation of LLM Outputs with Human Preferences

Taught by

ACM SIGCHI

