Overview
Explore efficient distributed deep learning techniques using MXNet in this 45-minute lecture by Anima Anandkumar of UC Irvine. Delve into practical considerations for machine learning, the challenges of deploying large-scale learning, and declarative programming. Discover MXNet's mixed programming paradigm and hierarchical parameter server, and examine tensor contraction as a layer. Learn about Amazon AI services such as Rekognition for object, scene, and facial analysis, and Polly, with its focus on voice quality and pronunciation. Gain insight into why writing parallel programs is hard and into the broader computational challenges of large-scale learning in this talk from the Simons Institute's Computational Challenges in Machine Learning series.
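The "mixed programming paradigm" mentioned above refers to MXNet's combination of imperative execution (eager NDArray operations) with declarative, symbolic computation graphs. As a rough illustration only, not material from the lecture itself, here is a minimal sketch using the classic MXNet 1.x Python API (mx.nd and mx.sym); the variable names are illustrative:

```python
import mxnet as mx

# Imperative style: NDArray operations run eagerly, like NumPy.
a = mx.nd.ones((2, 3))
b = a * 2 + 1
print(b.asnumpy())

# Declarative style: build a symbolic graph first, then bind and run it.
x = mx.sym.Variable('x')
y = x * 2 + 1                                   # nothing is computed yet
exe = y.simple_bind(ctx=mx.cpu(), x=(2, 3))     # allocate memory for the graph
out = exe.forward(x=mx.nd.ones((2, 3)))[0]      # execute the bound graph
print(out.asnumpy())
```

The declarative form lets the engine see the whole graph before execution, which is what enables the memory and parallelization optimizations discussed in the talk, while the imperative form keeps debugging and experimentation simple.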
Syllabus
Intro
Practical Considerations for Machine Learning
Challenges in Deploying Large-Scale Learning
Declarative Programming
MXNet: Mixed Programming Paradigm
Writing Parallel Programs Is Hard
Hierarchical Parameter Server in MXNet
Tensors, Deep Learning & MXNet
Tensor Contraction as a Layer
Introducing Amazon AI
Rekognition: Object & Scene Detection
Rekognition: Facial Analysis
Polly: A Focus On Voice Quality & Pronunciation
Taught by
Simons Institute