Overview
Explore a conference talk featuring Shreya Shankar from UC Berkeley, who discusses the paper "Aligning LLM-Assisted Evaluation of LLM Outputs with Human Preferences." Discover EvalGen, an interface that provides automated assistance in generating evaluation criteria and implementing assertions for Large Language Model (LLM) outputs. Gain insights into the research methodology, findings, and implications of the study. Learn about the challenges and solutions in aligning LLM-assisted evaluations with human preferences, and understand the potential impact on the field of artificial intelligence. Engage with further AI research and industry trends through additional resources, including The Deep Dive newsletter and Unify's blog posts on the AI deployment stack.
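To make the idea of assertion-based evaluation concrete, the sketch below shows hand-written pass/fail criteria applied to a batch of LLM outputs in Python. The criteria, function names, and sample outputs are illustrative assumptions only, not code from EvalGen or the paper.

```python
# Illustrative sketch only: a minimal example of assertion-based evaluation
# of LLM outputs. Criteria and helper names are hypothetical and are not
# taken from the EvalGen system described in the talk.

import re
from typing import Callable, Dict, List

# A criterion maps an LLM output string to pass (True) / fail (False).
Criterion = Callable[[str], bool]

def no_apology(output: str) -> bool:
    """Fail outputs that open with boilerplate apologies."""
    return not re.match(r"^\s*(sorry|i apologize)", output, re.IGNORECASE)

def within_length(output: str, max_words: int = 50) -> bool:
    """Fail outputs longer than a chosen word budget."""
    return len(output.split()) <= max_words

def evaluate(outputs: List[str], criteria: Dict[str, Criterion]) -> Dict[str, float]:
    """Return the pass rate of each criterion over a batch of outputs."""
    return {
        name: sum(check(o) for o in outputs) / len(outputs)
        for name, check in criteria.items()
    }

if __name__ == "__main__":
    sample_outputs = [
        "Paris is the capital of France.",
        "Sorry, I cannot answer that question.",
    ]
    criteria = {"no_apology": no_apology, "within_length": within_length}
    print(evaluate(sample_outputs, criteria))
    # e.g. {'no_apology': 0.5, 'within_length': 1.0}
```

The point of the paper and talk is that choosing and grading such criteria by hand is error-prone, which motivates tooling like EvalGen for aligning these checks with human preferences.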
Syllabus
Aligning LLM-Assisted Evaluation of LLM Outputs with Human Preferences Explained
Taught by
Unify