
YouTube

Fine-tuning Llama 2 for Tone or Style Using Shakespeare Dataset

Trelis Research via YouTube

Overview

Learn how to fine-tune the Llama 2 language model for tone or style using a custom dataset in this 18-minute video tutorial. Explore the process of adapting the model to mimic Shakespearean language as an example. Discover techniques for loading Llama 2 with bitsandbytes, implementing LoRA for efficient fine-tuning, and selecting appropriate target modules. Gain insights into setting optimal training parameters, including batch size, gradient accumulation, and warm-up settings. Master the use of the AdamW optimizer and learn to evaluate training loss effectively. Troubleshoot common issues in Google Colab and run inference with your newly fine-tuned model. Access additional resources for embedding creation, supervised fine-tuning, and advanced scripts to enhance your language model customization skills.
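
The loading and adapter steps described above map onto the Hugging Face transformers and peft libraries. The sketch below is illustrative rather than a transcription of the video's notebook: the model ID, the q_proj/v_proj target modules, and the quantization and LoRA settings are common defaults, not confirmed choices from the tutorial.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Assumed model ID; the video may use a different Llama 2 variant.
model_id = "meta-llama/Llama-2-7b-hf"

# 4-bit quantized loading via bitsandbytes keeps the base weights small
# enough to fine-tune on a single Colab GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters: q_proj and v_proj are a common choice of target modules,
# not necessarily the exact set selected in the video.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

The training-parameter and inference steps listed in the syllabus are sketched in a second snippet after the syllabus below.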

Syllabus

How to fine-tune on a custom dataset
What dataset should I use for fine-tuning?
Fine-tuning in Google Colab
Loading Llama 2 with bitsandbytes
Fine-tuning with LoRA
Target modules for fine-tuning
Loading data for fine-tuning
Training Llama 2 with a validation set
Setting training parameters for fine-tuning
Choosing batch size for training
Setting gradient accumulation for training
Using an eval dataset for training
Setting warm-up parameters for training
Using AdamW for optimisation
Fix for when commands don't work in Colab
Evaluating training loss
Running inference after training
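
The syllabus items on batch size, gradient accumulation, the eval dataset, warm-up, and AdamW correspond to standard Hugging Face TrainingArguments fields. The values below are placeholders chosen for illustration, and train_dataset / eval_dataset stand in for whatever tokenized Shakespeare splits the video prepares; this is a minimal sketch of the training and inference flow, not the tutorial's exact script.

```python
from transformers import Trainer, TrainingArguments, DataCollatorForLanguageModeling

# Llama's tokenizer has no pad token by default; reuse EOS for padding.
tokenizer.pad_token = tokenizer.eos_token

# Illustrative values only; the video chooses its own batch size,
# accumulation steps, warm-up, and evaluation cadence.
training_args = TrainingArguments(
    output_dir="llama2-shakespeare",
    per_device_train_batch_size=4,      # batch size
    gradient_accumulation_steps=4,      # effective batch size of 16
    warmup_ratio=0.03,                  # warm-up settings
    num_train_epochs=1,
    learning_rate=2e-4,
    optim="adamw_torch",                # AdamW optimizer
    evaluation_strategy="steps",        # evaluate on the validation set
    eval_steps=20,
    logging_steps=10,
)

trainer = Trainer(
    model=model,                        # the LoRA-wrapped model from the sketch above
    args=training_args,
    train_dataset=train_dataset,        # hypothetical tokenized Shakespeare splits
    eval_dataset=eval_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Running inference after training: generate text in the fine-tuned style.
prompt = "Shall I compare thee to a summer's day?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Watching the training and evaluation loss reported at each logging step is the simplest way to judge whether the run is converging or overfitting, which is the focus of the "Evaluating training loss" chapter.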

Taught by

Trelis Research
