Overview
Explore an innovative approach to federated fine-tuning of Large Language Models (LLMs) in this 20-minute conference talk from USENIX ATC '24. Dive into FwdLLM, a novel protocol designed to make Federated Learning (FL) of LLMs efficient on mobile devices. Learn how the researchers from Beijing University of Posts and Telecommunications address the challenge of balancing LLM complexity with mobile resource constraints. Discover the key components of FwdLLM, including backpropagation-free training via perturbed inferences, adaptive allocation of computational load across devices, and discriminative sampling of perturbed predictions. Gain insights into the significant advantages of this approach, such as faster convergence and a reduced memory footprint, and understand how it enables federated billion-parameter LLMs on commercial off-the-shelf mobile devices for the first time.
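To make the backpropagation-free idea concrete, here is a minimal sketch of gradient estimation from perturbed inferences: random directions are probed with forward passes only, and the directional losses are combined into a gradient estimate. The toy linear model, function names, and hyperparameters below are illustrative assumptions for this demo, not FwdLLM's actual implementation or API.

```python
# Minimal sketch: backpropagation-free ("forward gradient") training
# via perturbed inferences. Illustrative only; not FwdLLM's code.
import numpy as np

rng = np.random.default_rng(0)

def toy_loss(w, X, y):
    """Squared error of a linear model -- a stand-in for an LLM's
    fine-tuning loss, which would normally require backprop."""
    return float(np.mean((X @ w - y) ** 2))

def estimate_forward_gradient(loss_fn, w, eps=1e-3, num_perturbations=8):
    """Estimate the gradient using forward passes only.

    For each random direction v, two perturbed inferences give the
    directional derivative (loss(w + eps*v) - loss(w - eps*v)) / (2*eps);
    scaling v by it yields a gradient estimate whose expectation matches
    the true gradient. Averaging several perturbations reduces variance
    (FwdLLM's discriminative sampling goes further by favoring the more
    informative perturbations)."""
    g = np.zeros_like(w)
    for _ in range(num_perturbations):
        v = rng.standard_normal(w.shape)
        d = (loss_fn(w + eps * v) - loss_fn(w - eps * v)) / (2 * eps)
        g += d * v
    return g / num_perturbations

# Tiny synthetic regression problem.
X = rng.standard_normal((64, 5))
w_true = rng.standard_normal(5)
y = X @ w_true
w = np.zeros(5)

for step in range(200):
    g = estimate_forward_gradient(lambda p: toy_loss(p, X, y), w)
    w -= 0.05 * g  # plain SGD on the forward-gradient estimate

print("final loss:", toy_loss(w, X, y))  # approaches zero
```

Because only forward passes are needed, a device never stores activations for a backward pass, which is the source of the reduced memory footprint highlighted in the talk; in an FL setting, each client would compute such perturbed-inference estimates locally and the server would aggregate them.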
Syllabus
USENIX ATC '24 - FwdLLM: Efficient Federated Finetuning of Large Language Models with Perturbed...
Taught by
USENIX