
Fine-tuning LLMs Without Maxing Out Your GPU - LoRA for Parameter-Efficient Training

Data Centric via YouTube

Overview

Learn how to use LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning of large language models in this 47-minute video. Follow along as the instructor demonstrates fine-tuning RoBERTa to classify consumer finance complaints using Google Colab with a V100 GPU. Gain insight into the end-to-end process, including access to a detailed notebook and technical blog. Discover how to keep GPU memory usage low while still fine-tuning effectively. Explore additional resources on building LLM-powered applications, understanding precision and recall, and booking consultations for further guidance.
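The memory savings described above come from LoRA's core trick: freeze the pretrained weight matrix and learn only a low-rank additive update. A minimal NumPy sketch (dimensions are illustrative, chosen to match RoBERTa's hidden size of 768; the rank and scaling values are typical defaults, not taken from the video):

```python
import numpy as np

# LoRA: instead of updating a full weight matrix W (d x k), learn a
# low-rank update B @ A, where B is (d x r) and A is (r x k), with r << min(d, k).
d, k, r = 768, 768, 8  # hidden size matches RoBERTa-base; r is the LoRA rank

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))         # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01  # trainable projection, small random init
B = np.zeros((d, r))                    # trainable projection, zero init

alpha = 16  # LoRA scaling factor; the update is scaled by alpha / r
W_eff = W + (alpha / r) * (B @ A)

# Because B starts at zero, the adapted weight equals the original at init:
assert np.allclose(W_eff, W)

# Trainable-parameter count: low-rank adapter vs. full fine-tuning.
full_params = d * k          # 589,824
lora_params = r * (d + k)    # 12,288 -- roughly 2% of the full matrix
print(f"full: {full_params}, lora: {lora_params}, "
      f"ratio: {lora_params / full_params:.4f}")
```

Only `A` and `B` receive gradients during training, which is why optimizer state and gradient memory shrink dramatically; in practice a library such as Hugging Face's `peft` wires this into each attention layer for you.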

Syllabus

Fine-tune your LLMs, Without Maxing out Your GPU!

Taught by

Data Centric

