Distributed Optimization and Machine Learning

Indian Institute of Technology Bombay and NPTEL via Swayam

Overview

ABOUT THE COURSE: Centralized access to information and its subsequent processing is often computationally prohibitive over large networks due to communication overhead and the scale of the problem. Consequently, such systems rely on control and optimization algorithms that are fully distributed or even decentralized in nature. This course will provide a comprehensive overview of the design and analysis of distributed optimization algorithms and their applications to machine learning. The aim is to revisit classical control and optimization algorithms for centralized optimization and discuss how these can be extended to the distributed setting to accommodate the effects of communication constraints, network topology, computational resources, and robustness. Topics include graph theory, iterative methods for convex problems, synchronous and asynchronous setups, consensus algorithms, and distributed machine learning. We will also explore some recent literature in this area that exploits control theory for the design of accelerated distributed optimization algorithms.

INTENDED AUDIENCE: MTech and PhD students in the broad areas of optimization and data science.

PREREQUISITES: A background in convex optimization and differential equations is preferred.

INDUSTRY SUPPORT: Tata Consultancy Services, Microsoft, Google, Amazon, IBM, etc.
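To make the distributed setting described above concrete, here is a minimal sketch (not taken from the course materials) of decentralized gradient descent: each agent holds a private quadratic cost, averages its iterate with its ring neighbors through a doubly stochastic weight matrix, and takes a local gradient step. The quadratic costs, ring topology, and step size are all illustrative assumptions.

```python
import numpy as np

# Illustrative setup: n agents on a ring, agent i has a private cost
# f_i(x) = 0.5 * a_i * (x - b_i)^2; together they minimize sum_i f_i.
rng = np.random.default_rng(0)
n = 5
a = rng.uniform(1.0, 2.0, n)
b = rng.uniform(-1.0, 1.0, n)

# Doubly stochastic mixing matrix for a ring graph: equal weight on
# self and the two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0

x = np.zeros(n)                # each agent's local estimate of the minimizer
alpha = 0.05                   # constant step size
for k in range(500):
    grad = a * (x - b)         # local gradients, computable in parallel
    x = W @ x - alpha * grad   # consensus step + local gradient step

x_star = np.sum(a * b) / np.sum(a)   # centralized solution, for comparison
print(x, x_star)   # with a constant step, agents reach a neighborhood of x_star
```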

Syllabus

Week 1:
1. Introduction to Distributed Optimization
2. Mathematical Optimization, Convex Sets and Convex Functions
Week 2:
1. Strong Convexity and Its Implications
2. Constrained Optimization Problems: Primal and Lagrangian Dual
Week 3:
1. KKT Conditions and Primal/Dual Methods
2. Analysis of Gradient Descent
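As a companion to Week 3's analysis of gradient descent, the toy example below (my illustration, not course code) runs gradient descent on a strongly convex quadratic; with extreme curvatures m and L, the step size 2/(m + L) gives the classical linear rate (L - m)/(L + m) per iteration. The matrix Q and iteration count are arbitrary choices.

```python
import numpy as np

# Gradient descent on f(x) = 0.5 * x^T Q x with Q = diag(1, 10),
# so m = 1 and L = 10; grad f(x) = Q x and the minimizer is x* = 0.
Q = np.diag([1.0, 10.0])
m, L = 1.0, 10.0
x = np.array([5.0, 5.0])
step = 2.0 / (m + L)
for k in range(50):
    x = x - step * (Q @ x)
# The error contracts by (L - m)/(L + m) = 9/11 per step.
print(np.linalg.norm(x))   # roughly 3e-4 after 50 iterations
```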
Week 4:
1. Analysis of Accelerated Optimization Algorithms
2. Optimization Algorithms as Dynamical Systems and Introduction to Stability Theory
Week 5:
1. Lyapunov Analysis of Gradient Flows
2. Gradient Flows for Equality Constrained Optimization and Saddle-Point Problems
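Week 5's Lyapunov analysis of gradient flows can be previewed numerically: along the flow x'(t) = -grad f(x(t)), the objective itself serves as a Lyapunov function, since dV/dt = -||grad f(x)||^2 <= 0 for V = f. The sketch below (an illustrative f of my choosing, not from the course) discretizes the flow with forward Euler and checks that f decreases monotonically.

```python
import numpy as np

# Gradient flow x'(t) = -grad f(x(t)) for the convex function
# f(x) = 0.5*||x||^2 + 0.25*||x||^4, simulated by forward Euler.
def f(x):
    s = np.dot(x, x)
    return 0.5 * s + 0.25 * s ** 2

def grad_f(x):
    return x + np.dot(x, x) * x    # gradient of f above

x = np.array([2.0, -1.0])
dt = 0.01                          # small enough for monotone decrease
values = []
for _ in range(1000):
    values.append(f(x))
    x = x - dt * grad_f(x)         # Euler step along the flow

# V = f acts as a Lyapunov function: it never increases along the trajectory.
assert all(v2 <= v1 for v1, v2 in zip(values, values[1:]))
print(values[0], values[-1])       # 8.75 at the start, near 0 at the end
```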
Week 6:
1. Accelerated Gradient Flows
2. Augmented Lagrangian and Method of Multipliers
Week 7:
1. ADMM (Alternating Direction Method of Multipliers)
2. Dual Ascent and Dual Decomposition
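To illustrate Week 7's ADMM topic, here is a minimal sketch of scaled-form ADMM applied to the lasso, minimize 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z. The random problem data and the parameters lam and rho are illustrative choices, not prescribed by the course.

```python
import numpy as np

# Scaled-form ADMM for the lasso. The x-update is a ridge solve, the
# z-update is soft-thresholding (the prox of the l1 norm), and u is the
# scaled dual variable accumulating constraint violations x - z.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
lam, rho = 0.1, 1.0

x = np.zeros(10)
z = np.zeros(10)
u = np.zeros(10)
AtA, Atb = A.T @ A, A.T @ b
for k in range(200):
    x = np.linalg.solve(AtA + rho * np.eye(10), Atb + rho * (z - u))
    z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
    u = u + x - z

print(np.round(z, 3))   # z is typically sparse due to the l1 penalty
```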
Week 8:
1. Introduction to Graph Theory
2. Distributed Consensus
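Week 8's distributed consensus can be demonstrated in a few lines: iterating x <- Wx with a symmetric, doubly stochastic W on a connected graph drives every node's value to the network average. The path-graph weight matrix below is one illustrative choice.

```python
import numpy as np

# Average consensus on a 4-node path graph. W is symmetric and doubly
# stochastic, so x_k converges to the all-average vector.
W = np.array([
    [0.50, 0.50, 0.00, 0.00],
    [0.50, 0.25, 0.25, 0.00],
    [0.00, 0.25, 0.25, 0.50],
    [0.00, 0.00, 0.50, 0.50],
])
x = np.array([1.0, 3.0, 5.0, 7.0])   # initial local values (average = 4)
for k in range(100):
    x = W @ x                        # each node averages with its neighbors
print(x)                             # approximately [4, 4, 4, 4]
```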
Week 9:
1. Continuous-Time Analysis of Consensus Algorithms
2. Distributed Optimization Problem (Economic Dispatch Problem)
Week 10:
1. Distributed Optimization Algorithms
2. Introduction to Neural Networks and Ring-Allreduce Algorithm
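Week 10 introduces the ring-allreduce algorithm used to sum gradients in data-parallel training. The toy simulation below is a sketch under simplifying assumptions (synchronous steps, a vector that splits into equal chunks): it runs the reduce-scatter and all-gather phases among n simulated workers and verifies the result. The appeal of the ring layout is that each worker sends only 2*(n-1)/n of the vector per neighbor link rather than the whole vector.

```python
import numpy as np

# Toy ring-allreduce: n workers each hold a gradient split into n chunks.
n = 4
rng = np.random.default_rng(2)
grads = [rng.standard_normal(8) for _ in range(n)]
chunks = [np.array_split(g.copy(), n) for g in grads]

# Phase 1, reduce-scatter: partial sums travel around the ring; after
# n-1 steps, worker i holds the fully summed chunk (i + 1) % n.
for s in range(n - 1):
    sending = [chunks[i][(i - s) % n].copy() for i in range(n)]
    for i in range(n):
        chunks[i][(i - 1 - s) % n] += sending[(i - 1) % n]

# Phase 2, all-gather: circulate the completed chunks so that every
# worker ends up with the full summed vector.
for s in range(n - 1):
    sending = [chunks[i][(i + 1 - s) % n].copy() for i in range(n)]
    for i in range(n):
        chunks[i][(i - s) % n] = sending[(i - 1) % n]

result = np.concatenate(chunks[0])
assert np.allclose(result, sum(grads))   # every worker now holds the sum
print(result)
```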
Week 11:
1. Introduction to Federated Learning
2. Data Heterogeneity in Federated Learning
Week 12:
1. Computational Heterogeneity in Federated Learning
2. Robustness in Federated Learning
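Weeks 11 and 12 focus on federated learning. The minimal FedAvg-style sketch below (illustrative quadratic client objectives and hyperparameters, not the course's code) shows the core loop: each client runs a few local gradient steps on its own heterogeneous objective, and the server averages the returned models, weighted by local dataset size.

```python
import numpy as np

# FedAvg-style loop: clients hold heterogeneous objectives
# f_c(w) = 0.5 * ||w - target_c||^2 with size_c local samples each.
rng = np.random.default_rng(3)
n_clients, d = 5, 3
targets = [rng.standard_normal(d) for _ in range(n_clients)]  # heterogeneity
sizes = rng.integers(10, 100, n_clients)                      # local data sizes

w = np.zeros(d)                                # global model
for rnd in range(20):                          # communication rounds
    local_models = []
    for c in range(n_clients):
        wc = w.copy()
        for _ in range(5):                     # local steps on client c
            wc -= 0.1 * (wc - targets[c])      # grad of 0.5*||w - target_c||^2
        local_models.append(wc)
    w = np.average(local_models, axis=0, weights=sizes)  # server aggregation

print(w)                                           # learned global model
print(np.average(targets, axis=0, weights=sizes))  # size-weighted optimum
```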

Taught by

Prof. Mayank Baranwal
