Overview
Explore a comprehensive analysis of gradient surgery for multi-task learning in this informative video. Delve into the challenges that arise when the gradients of different tasks have significantly different magnitudes or point in conflicting directions. Learn about PCGrad, a method that projects a task's gradient onto the normal plane of any conflicting task's gradient while maintaining optimality guarantees. Examine the three conditions in the multi-task optimization landscape that cause detrimental gradient interference: conflicting gradient directions, large differences in gradient magnitudes, and high curvature. Discover how this form of gradient surgery leads to substantial improvements in efficiency and performance across challenging multi-task supervised and reinforcement learning problems. Understand the model-agnostic nature of the approach and its potential to enhance previously proposed multi-task architectures.
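The projection step itself is simple: whenever two task gradients g_i and g_j conflict (their dot product is negative), g_i is replaced by g_i − (g_i·g_j / ‖g_j‖²) g_j, removing the component that opposes task j. Below is a minimal NumPy sketch of that step under stated assumptions: the function name pcgrad and the toy gradients are illustrative, and a full implementation would operate on flattened per-task gradients of the shared model parameters.

```python
import numpy as np

def pcgrad(grads, rng=None):
    """Sketch of the PCGrad projection: for each task gradient, remove the
    component that conflicts with any other task's gradient, then sum.
    `grads` is a list of 1-D float arrays, one flattened gradient per task."""
    rng = rng or np.random.default_rng()
    projected = []
    for i, g in enumerate(grads):
        g = g.copy()
        # Visit the other tasks in random order, as the paper suggests.
        others = [j for j in range(len(grads)) if j != i]
        rng.shuffle(others)
        for j in others:
            g_j = grads[j]
            dot = g @ g_j
            if dot < 0:  # conflict: the gradients form an obtuse angle
                # Project g onto the normal plane of g_j.
                g -= dot / (g_j @ g_j) * g_j
        projected.append(g)
    return np.sum(projected, axis=0)

# Toy example with two conflicting task gradients.
g1 = np.array([1.0, 1.0])
g2 = np.array([-1.0, 0.5])
print(pcgrad([g1, g2]))
```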
Syllabus
Introduction
What is multi-task learning?
Example
Loss Function
Theorems
Conditions
Tragic Triad Effect
Multi-Task Learning
Taught by
Yannic Kilcher