Characterizing Communication in Distributed Parameter-Efficient Fine-Tuning for LLMs
HOTI - Hot Interconnects Symposium via YouTube
Overview
Watch a 29-minute technical presentation from the HOTI Hot Interconnects Symposium exploring the communication patterns and characteristics of distributed Parameter-Efficient Fine-Tuning (PEFT) approaches for Large Language Models. Presented by researchers Nawras Alnaasan, Horng-Ruey Huang, Aamir Shafi, Hari Subramoni, and Dhabaleswar K. Panda, with AMD's Shelby Lockhart chairing, the talk examines the networking and interconnect challenges of efficiently fine-tuning massive language models across distributed systems. Gain insights into optimizing communication overhead and scaling PEFT methods in this talk from the Networks for Large Language Models technical paper session.
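The talk itself characterizes these communication patterns empirically; as background only (not material from the presentation), the sketch below illustrates why distributed PEFT changes what must be communicated. In a LoRA-style adapter, the pretrained weight is frozen and only two small low-rank factors are trained, so under data parallelism only those factors' gradients need to be synchronized across workers, rather than gradients for the full model.

```python
# A minimal, assumed LoRA-style PEFT layer in PyTorch (not from the talk).
# With the base weight frozen, data-parallel fine-tuning only all-reduces
# the gradients of the small factors A and B, shrinking communication
# volume relative to full fine-tuning.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int,
                 rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Frozen pretrained weight: produces no gradients, so it is
        # excluded from gradient synchronization.
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        # Trainable low-rank factors: the only tensors whose gradients
        # must be communicated (e.g., all-reduced) across workers.
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base projection plus the scaled low-rank update.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling

layer = LoRALinear(4096, 4096, rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total} ({100 * trainable / total:.2f}%)")
```

Running this prints a trainable fraction well under 1% for the layer above, which is the rough intuition behind the reduced gradient-synchronization traffic the presentation analyzes in detail.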
Syllabus
Day 1 09:00: Characterizing Communication in Distributed Parameter-Efficient Fine-Tuning for LLMs
Taught by
HOTI - Hot Interconnects Symposium