Overview
Explore PyTorch, a powerful deep learning framework, in this 36-minute conference talk from Strange Loop. Dive into the design and challenges of PyTorch, focusing on its fast, dynamic neural networks and high performance on both GPUs and CPUs. Learn how the framework mixes Python with C/C++ under the hood, and discover why its dynamic nature is crucial for cutting-edge AI research. Gain insight into the tensor compiler that powers PyTorch, enabling on-the-fly operation fusion for enhanced speed. Examine topics such as machine translation, adversarial networks, computation graph toolkits, and seamless GPU tensors. Understand the benefits of compilation and tracing JIT in the context of deep learning frameworks.
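The "dynamic" point above means the network's graph is rebuilt by ordinary Python control flow on every forward pass, and "seamless GPU tensors" means the same code runs on CPU or GPU by moving data with a single call. The sketch below illustrates both ideas with standard PyTorch APIs; it is not code from the talk, and the module and variable names are purely illustrative:

```python
import torch
import torch.nn as nn

# Minimal sketch: the graph is built on the fly each forward pass,
# so an ordinary Python loop decides how many times the layer is applied.
class TinyRecurrent(nn.Module):
    def __init__(self, size=16):
        super().__init__()
        self.linear = nn.Linear(size, size)

    def forward(self, x, steps):
        for _ in range(steps):          # runtime value, not a fixed graph
            x = torch.relu(self.linear(x))
        return x.sum()

device = "cuda" if torch.cuda.is_available() else "cpu"  # seamless CPU/GPU tensors
model = TinyRecurrent().to(device)
x = torch.randn(8, 16, device=device, requires_grad=True)

loss = model(x, steps=3)
loss.backward()        # gradients flow through whatever graph was just built
print(x.grad.shape)
```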
Syllabus
Intro
Overview of the talk
Machine Translation
Adversarial Networks
Adversarial Nets
Chained Together
Trained with Gradient Descent
Computation Graph Toolkits
Declarative Toolkits
Imperative Toolkits
Seamless GPU Tensors
Neural Networks
Python is slow
Types of typical operators
Add - Mul: A simple use-case
High-end GPUs have faster memory
GPUs like parallelizable problems
Compilation benefits
Tracing JIT
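The compilation and tracing-JIT topics in the syllabus refer to recording the operations actually executed on example inputs and then optimizing the recorded graph, for instance by fusing chains of pointwise operations into a single kernel so memory is read and written once rather than once per operation. A minimal sketch using the standard torch.jit.trace API (the function fused_op and the example shapes are illustrative, not taken from the talk):

```python
import torch

def fused_op(x, y):
    # A chain of pointwise ops: a fusing compiler can merge these into one kernel.
    return (x * y + y).relu()

example = (torch.randn(1024), torch.randn(1024))
traced = torch.jit.trace(fused_op, example)  # records the ops run on the example inputs
print(traced.graph)                          # the recorded graph the JIT can optimize and fuse
```

Because tracing records only the path taken for the given example inputs, data-dependent control flow is not captured; that trade-off is why a tracing JIT pairs naturally with PyTorch's otherwise fully dynamic execution.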
Taught by
Strange Loop Conference