Overview
Explore the process of fine-tuning a large language model (LLM) locally on an M-series Mac in this comprehensive tutorial video. Learn how to adapt Mistral-7B to respond to YouTube comments in the presenter's style. Dive into topics including the motivation behind local fine-tuning, an introduction to MLX, setting up the environment, and working with the example code. Gain hands-on experience with inference using both the un-finetuned and finetuned models, understand the QLoRA fine-tuning technique, and learn how the training dataset must be formatted. Follow along as the presenter runs training locally and shares guidance on choosing the LoRA rank. Access additional resources, including a blog post, GitHub repository, and related videos, to further enhance your understanding of LLM fine-tuning on a Mac.
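As a rough illustration of the inference step described above, the sketch below loads a Mistral-7B model and generates a reply with the mlx-lm package. The model name, prompt template, and argument names are assumptions for illustration (they vary across mlx-lm versions) and are not necessarily the exact ones used in the video.

```python
# Minimal sketch: generating a comment reply with an instruction-tuned
# Mistral model via the mlx-lm package on an M-series Mac.
# Assumes `pip install mlx-lm`; model name and prompt format are illustrative.
from mlx_lm import load, generate

# Download/load the model and its tokenizer (assumed Hugging Face repo name)
model, tokenizer = load("mistralai/Mistral-7B-Instruct-v0.2")

# Mistral-style instruction prompt wrapping a hypothetical YouTube comment
prompt = "[INST] Great video, thank you! [/INST]"

# Generate a response (argument names may differ slightly between versions)
response = generate(model, tokenizer, prompt=prompt, max_tokens=140)
print(response)
```

Running the same prompt before and after fine-tuning is how the video contrasts the base model's generic replies with the adapted, presenter-style responses.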
Syllabus
Intro
Motivation
MLX
GitHub Repo
Setting up environment
Example Code
Inference with un-finetuned model
Fine-tuning with QLoRA
Aside: dataset formatting
Running local training
Inference with finetuned model
Note on LoRA rank
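The dataset formatting and local training steps listed above roughly correspond to the sketch below: MLX's LoRA example expects JSONL files (train.jsonl, valid.jsonl, test.jsonl) whose lines each contain a single "text" field. The comment/response pairs, prompt template, file paths, and training flags shown here are illustrative assumptions, not the exact values from the video; check the linked GitHub repo for the actual code.

```python
# Minimal sketch of the dataset-formatting step for MLX LoRA fine-tuning.
# The example pairs and the [INST] template below are hypothetical placeholders.
import json
import os

pairs = [
    {"comment": "Great video, thanks!", "response": "Glad it was helpful!"},
    {"comment": "What Mac did you use?", "response": "An M-series Mac with 16GB of RAM."},
]

os.makedirs("data", exist_ok=True)
with open("data/train.jsonl", "w") as f:
    for p in pairs:
        # Each line: {"text": "<full prompt + completion>"}
        text = f"[INST] {p['comment']} [/INST] {p['response']}"
        f.write(json.dumps({"text": text}) + "\n")

# Local QLoRA-style training is then launched against a 4-bit quantized model
# with the LoRA script from Apple's mlx-examples repo, roughly:
#   python lora.py --model <path-to-quantized-mistral> --data ./data --train \
#       --iters 100 --batch-size 4 --lora-layers 16
# (flag names follow the mlx-examples lora.py script at the time of the video;
#  consult the repo for the current interface and for how LoRA rank is set)
```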
Taught by
Shaw Talebi