Fine-Tuning Giant Neural Networks on Commodity Hardware with Automatic Pipeline Model Parallelism

USENIX via YouTube

Overview

Explore a groundbreaking approach to fine-tuning giant neural networks on commodity hardware in this 14-minute conference talk from USENIX ATC '21. Delve into FTPipe, an innovative system that introduces a new dimension of pipeline model parallelism, making multi-GPU execution of fine-tuning tasks for massive neural networks accessible on standard equipment. Learn about the novel Mixed-pipe approach to model partitioning and task allocation, which allows for more flexible and efficient use of GPU resources without compromising accuracy. Discover how this technique achieves up to 3× speedup and state-of-the-art accuracy when fine-tuning giant transformers with billions of parameters, such as BERT-340M, GPT2-1.5B, and T5-3B, on commodity RTX2080-Ti GPUs. Gain insights into the potential of this technology to democratize access to state-of-the-art models pre-trained on high-end supercomputing systems.
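As background for the talk: pipeline model parallelism splits a network's layers into stages, places each stage on its own GPU, and streams micro-batches through the stages so they work concurrently. The sketch below is a minimal, GPU-free illustration of why pipelining helps; the stage and micro-batch counts are illustrative assumptions, and this is a generic synchronous pipeline schedule, not FTPipe's actual Mixed-pipe partitioning algorithm.

```python
def sequential_steps(num_stages: int, num_microbatches: int) -> int:
    # Baseline: one device runs every stage for every micro-batch,
    # one at a time, so the cost is simply stages x micro-batches.
    return num_stages * num_microbatches


def pipeline_steps(num_stages: int, num_microbatches: int) -> int:
    # Pipelined: after a fill phase of (num_stages - 1) steps,
    # one micro-batch completes per step, so the "bubble" overhead
    # shrinks as the number of micro-batches grows.
    return num_stages + num_microbatches - 1


def speedup(num_stages: int, num_microbatches: int) -> float:
    return sequential_steps(num_stages, num_microbatches) / pipeline_steps(
        num_stages, num_microbatches
    )


# Example: 4 pipeline stages (e.g. 4 commodity GPUs) and 16 micro-batches.
# Sequential execution takes 4 * 16 = 64 steps; the pipeline takes
# 4 + 16 - 1 = 19 steps, roughly a 3.4x speedup in this idealized model.
print(sequential_steps(4, 16), pipeline_steps(4, 16), round(speedup(4, 16), 2))
```

This idealized count assumes equal-cost stages; in practice, balancing heterogeneous layers across GPUs is the hard part, which is what the talk's automatic partitioning addresses.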

Syllabus

USENIX ATC '21 - Fine-tuning giant neural networks on commodity hardware with automatic pipeline...

Taught by

USENIX
