FASTER Code for Supervised Fine-Tuning and DPO Training with UNSLOTH

Discover AI via YouTube

Overview

Learn to accelerate Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO) training for Large Language Models through a detailed video tutorial that explores two free Jupyter notebooks. Dive into practical implementations using HuggingFace-compatible scripts for training Llama or Mistral models, with step-by-step demonstrations of the free version's capabilities. Access comprehensive examples, including an Alpaca with Mistral 7B implementation and DPO Zephyr training, complete with direct links to ready-to-use Google Colab notebooks for hands-on experimentation in AI model training and optimization.
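
The notebooks build on Unsloth's FastLanguageModel loader together with Hugging Face's TRL trainers. The sketch below is a minimal, illustrative outline of the SFT stage, assuming the public Unsloth and TRL APIs as used in the Alpaca/Mistral 7B notebook; the checkpoint name, dataset, and hyperparameters are placeholders rather than the exact values from the video.

```python
# Minimal SFT sketch with Unsloth + TRL (illustrative names and hyperparameters).
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load a 4-bit quantized Mistral 7B base model through Unsloth's fast loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",  # placeholder checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Format an Alpaca-style dataset into a single "text" field per example.
prompt = "### Instruction:\n{}\n\n### Input:\n{}\n\n### Response:\n{}"
dataset = load_dataset("yahma/alpaca-cleaned", split="train")  # placeholder dataset
dataset = dataset.map(
    lambda ex: {
        "text": prompt.format(ex["instruction"], ex["input"], ex["output"])
        + tokenizer.eos_token
    }
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```

The DPO Zephyr notebook follows the same pattern, swapping SFTTrainer for TRL's DPOTrainer and the instruction dataset for a preference dataset of chosen/rejected response pairs.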

Syllabus

FASTER Code for SFT + DPO Training: UNSLOTH

Taught by

Discover AI

Reviews

