
Characterizing Communication in Distributed Parameter-Efficient Fine-Tuning for LLMs

HOTI - Hot Interconnects Symposium via YouTube

Overview

Watch a 29-minute technical presentation from the HOTI Hot Interconnects Symposium exploring the communication patterns and characteristics of distributed Parameter-Efficient Fine-Tuning (PEFT) approaches for Large Language Models. Presented by researchers Nawras Alnaasan, Horng-Ruey Huang, Aamir Shafi, Hari Subramoni, and Dhabaleswar K. Panda, with AMD's Shelby Lockhart chairing, the talk examines the networking and interconnect challenges involved in efficiently fine-tuning massive language models across distributed systems. Gain insights into optimizing communication overhead and scaling PEFT methods as part of the Networks for Large Language Models technical paper session.
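To give a sense of why communication characteristics differ between full fine-tuning and PEFT (this sketch is not from the talk itself, just a back-of-the-envelope illustration): in data-parallel training, each step all-reduces gradients for every trainable parameter, so a LoRA-style adapter of small rank r drastically shrinks the synchronized gradient volume. The dimensions and rank below are assumed example values.

```python
# Back-of-the-envelope sketch (not from the talk): gradient elements
# all-reduced per step in data-parallel fine-tuning. Full fine-tuning
# synchronizes every weight's gradient; a LoRA-style adapter of rank r
# only synchronizes two small factor matrices per adapted projection.

def full_ft_grads(d_model: int, n_layers: int) -> int:
    """Gradient elements per step with one trainable
    d_model x d_model projection per layer."""
    return n_layers * d_model * d_model

def lora_grads(d_model: int, n_layers: int, rank: int) -> int:
    """Gradient elements per step when only the low-rank factors
    A (d_model x rank) and B (rank x d_model) are trainable."""
    return n_layers * 2 * d_model * rank

if __name__ == "__main__":
    d, layers, r = 4096, 32, 8   # assumed example sizes
    full = full_ft_grads(d, layers)
    peft = lora_grads(d, layers, r)
    print(f"full fine-tuning: {full:,} gradient elements/step")
    print(f"LoRA rank {r}: {peft:,} gradient elements/step")
    print(f"reduction: {full / peft:.0f}x less gradient traffic")
```

With these example sizes the adapter cuts the per-step gradient traffic by roughly two orders of magnitude, which is why PEFT shifts where the communication bottlenecks appear, one of the questions the talk investigates.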

Syllabus

Day 1 09:00: Characterizing Communication in Distributed Parameter-Efficient Fine-Tuning for LLMs

Taught by

HOTI - Hot Interconnects Symposium

