
ORPO: Monolithic Preference Optimization without Reference Model

Yannic Kilcher via YouTube

Overview

Explore a comprehensive analysis of ORPO (Odds Ratio Preference Optimization), an approach that aligns language models with human preferences without requiring a reference model. Delve into the paper's key findings, which show that ORPO eliminates the need for a separate preference alignment phase in language model training. Examine the empirical and theoretical evidence that the odds ratio is an effective way to contrast favored and disfavored generation styles during supervised fine-tuning. Learn how ORPO, applied to models such as Phi-2, Llama-2, and Mistral, achieves state-of-the-art results on benchmarks including AlpacaEval 2.0, IFEval, and MT-Bench, surpassing larger language models. Gain insight into the crucial role supervised fine-tuning plays in preference alignment, and understand how ORPO's single-phase approach simplifies the process while maintaining strong performance.
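The odds-ratio penalty described above can be sketched in a few lines. The following is a minimal, illustrative Python implementation based on the paper's description: the function name, argument names, and the default weight `lam` are assumptions for illustration, not code from the paper or video.

```python
import math

def orpo_loss(avg_logp_chosen, avg_logp_rejected, nll_chosen, lam=0.1):
    """Sketch of a monolithic ORPO-style objective for one preference pair.

    avg_logp_chosen / avg_logp_rejected: average per-token log-probabilities
    the model assigns to the favored and disfavored completions.
    nll_chosen: the standard supervised fine-tuning loss on the favored
    completion. lam is an illustrative weighting hyperparameter.
    """
    # Turn average log-probabilities into sequence-level probabilities.
    p_w = math.exp(avg_logp_chosen)
    p_l = math.exp(avg_logp_rejected)

    # Odds of each completion: odds(y|x) = P(y|x) / (1 - P(y|x)).
    odds_w = p_w / (1.0 - p_w)
    odds_l = p_l / (1.0 - p_l)

    # Log odds ratio contrasting the favored vs. disfavored completion.
    log_or = math.log(odds_w / odds_l)

    # Penalty term: -log sigmoid(log odds ratio); small when the model
    # already prefers the favored completion, large otherwise.
    l_or = -math.log(1.0 / (1.0 + math.exp(-log_or)))

    # Monolithic objective: ordinary SFT loss plus the weighted penalty,
    # with no reference model involved anywhere.
    return nll_chosen + lam * l_or
```

Note that, unlike DPO-style objectives, nothing here queries a frozen reference policy: the penalty depends only on the model's own probabilities, which is what makes a single combined fine-tuning phase possible.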

Syllabus

ORPO: Monolithic Preference Optimization without Reference Model (Paper Explained)

Taught by

Yannic Kilcher

