Explore modular and composable transfer learning strategies in this informative lecture presented by Jonas Pfeiffer at the USC Information Sciences Institute. Delve into adapter-based fine-tuning techniques for parameter-efficient transfer learning with large pre-trained transformer models. Discover how small neural network components introduced at each layer can encapsulate downstream-task information while the pre-trained parameters remain frozen. Learn how the modularity and composability of adapters improve target-task performance and enable zero-shot cross-lingual transfer. Examine the benefits of adding modularity during pre-training to mitigate catastrophic interference and address challenges in multilingual models. Gain insights from Pfeiffer's extensive research experience in modular representation learning across multi-task, multilingual, and multi-modal contexts.
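To make the adapter idea concrete, here is a minimal sketch of a bottleneck adapter in PyTorch: a small residual MLP placed after a frozen layer, so only the adapter's parameters are trained. This is an illustration of the general technique, not Pfeiffer's exact architecture; the dimensions, module names, and the stand-in "frozen layer" are assumptions for the example.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # project to a small bottleneck
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # project back to model width
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual connection preserves the frozen model's representation;
        # the adapter only learns a small task-specific correction.
        return x + self.up(self.act(self.down(x)))

# Stand-in for one pre-trained transformer sublayer (illustrative only).
hidden_dim = 768
frozen_layer = nn.Linear(hidden_dim, hidden_dim)
for p in frozen_layer.parameters():
    p.requires_grad = False  # pre-trained weights stay fixed

adapter = Adapter(hidden_dim)  # only these parameters receive gradients
x = torch.randn(2, 10, hidden_dim)  # (batch, sequence, hidden)
out = adapter(frozen_layer(x))
```

Because each adapter sits behind a residual connection on an unchanged backbone, separate task- or language-specific adapters can be trained independently and then swapped or composed at inference time, which is the modularity and composability the lecture emphasizes.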