
Fine-tuning Flan-T5 LLM with HuggingFace Accelerate - Tutorial

Discover AI via YouTube

Overview

Learn how to fine-tune a Flan-T5 large language model using HuggingFace Accelerate in this hands-on coding tutorial. Follow along with a real-time implementation demonstrating how to prepare code for multi-GPU or multi-TPU environments. Master the process of fine-tuning a T5 model (an encoder-decoder transformer stack) on a custom dataset for a specific downstream task. Explore essential components including Docker configuration, transformer setup, preprocessing steps, and Accelerate parameter tuning. Gain practical experience with HuggingFace's acceleration tools while working in a free Google Colab notebook environment, and consult the HuggingFace Accelerate documentation to deepen your understanding of the fine-tuning process.
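As a rough illustration of the workflow the video walks through, the sketch below fine-tunes google/flan-t5-base with HuggingFace Accelerate. The dataset, column names, and hyperparameters are placeholders chosen for the example, not the exact ones used in the tutorial.

```python
# Minimal sketch: fine-tuning Flan-T5 with HuggingFace Accelerate.
# Model name, dataset, and hyperparameters are illustrative placeholders.
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq
from datasets import load_dataset
from accelerate import Accelerator

accelerator = Accelerator()  # handles device placement for GPU, multi-GPU, or TPU

model_name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Toy dataset slice; swap in your own custom dataset and column names.
raw = load_dataset("samsum", split="train[:1%]")

def preprocess(batch):
    # T5 is text-to-text: prefix the input with a task instruction,
    # tokenize targets separately as labels.
    inputs = tokenizer(["summarize: " + d for d in batch["dialogue"]],
                       max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["summary"],
                       max_length=128, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=raw.column_names)
collator = DataCollatorForSeq2Seq(tokenizer, model=model)
train_loader = DataLoader(tokenized, batch_size=8, shuffle=True, collate_fn=collator)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# prepare() wraps model, optimizer, and dataloader for the current hardware config.
model, optimizer, train_loader = accelerator.prepare(model, optimizer, train_loader)

model.train()
for epoch in range(3):
    for batch in train_loader:
        outputs = model(**batch)
        loss = outputs.loss
        accelerator.backward(loss)  # replaces loss.backward()
        optimizer.step()
        optimizer.zero_grad()

accelerator.wait_for_everyone()
accelerator.unwrap_model(model).save_pretrained(
    "flan-t5-finetuned", save_function=accelerator.save)
```

The key change versus a plain PyTorch loop is that device placement, distributed wrapping, and the backward pass all go through the Accelerator object, so the same script runs unchanged on a single Colab GPU or a multi-device setup.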

Syllabus

Intro
Docker
Transformer
Config File
Run Accelerate
Parameters
Preprocessing
Conclusion
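For the "Config File" and "Run Accelerate" steps listed above, a training script is normally configured with the `accelerate config` command and started with `accelerate launch`. Inside a Colab notebook, the same training function can instead be started with `notebook_launcher`. The snippet below is a hedged sketch of that notebook route; the function name and process count are assumptions, not the tutorial's exact setup.

```python
# Launching an Accelerate training function from inside a notebook (e.g. Google Colab).
# `training_loop` is a placeholder for a fine-tuning function like the one sketched above.
from accelerate import notebook_launcher

def training_loop():
    # Build the Accelerator, model, dataloaders, and run the training loop here.
    ...

# num_processes=1 for a single Colab GPU; use 8 when a TPU runtime is attached.
notebook_launcher(training_loop, args=(), num_processes=1)
```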

Taught by

Discover AI

