
Differentiable Functional Programming

Scala Days Conferences via YouTube

Overview

Explore differentiable functional programming in this Scala Days Berlin 2018 conference talk. Dive into parameterised functions, supervised learning, and gradient descent, and see deep learning framed as supervised learning of parameterised functions by gradient descent. Examine tensor multiplication, non-linearity, and algorithms for calculating gradients, comparing the mathematician's approach (symbolic differentiation) with the programmer's approach (automatic differentiation). Learn how dual numbers implement forward-mode differentiation, why forward mode scales with the size of the input dimension, and how the chain rule composes derivatives in either order. Finally, see why expressive type systems help keep tensor dimensions in agreement and why compilation to the GPU is needed for performance.
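The talk's core framing, supervised learning of a parameterised function by gradient descent, can be sketched in a few lines of Scala. This is a minimal illustration, not code from the talk: the names (`predict`, `loss`, `grad`, `step`), the toy data set, and the learning rate are all assumptions chosen for clarity.

```scala
// Minimal sketch: fit y = w * x by gradient descent on squared error.
// All names and data here are illustrative, not taken from the talk.
object GradientDescentSketch {
  // Toy training data sampled from the target function y = 3 * x
  val data: Seq[(Double, Double)] = Seq((1.0, 3.0), (2.0, 6.0), (3.0, 9.0))

  // Parameterised function: a single weight w
  def predict(w: Double)(x: Double): Double = w * x

  // Mean squared error over the data set
  def loss(w: Double): Double =
    data.map { case (x, y) => math.pow(predict(w)(x) - y, 2) }.sum / data.size

  // Analytic gradient of the loss with respect to w
  def grad(w: Double): Double =
    data.map { case (x, y) => 2 * (predict(w)(x) - y) * x }.sum / data.size

  // One gradient-descent step with learning rate lr:
  // move the parameter against the gradient of the loss
  def step(w: Double, lr: Double): Double = w - lr * grad(w)

  def main(args: Array[String]): Unit = {
    // Repeatedly apply the update; w approaches 3.0
    val wFinal = (1 to 100).foldLeft(0.0)((w, _) => step(w, 0.1))
    println(f"w = $wFinal%.4f")
  }
}
```

The talk generalises this picture from a single scalar weight to tensors of parameters, where computing `grad` by hand no longer scales and automatic differentiation takes over.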

Syllabus

Intro
Parameterised functions
Supervised learning
Gradient descent
Calculate gradient for current parameters
Deep learning is supervised learning of parameterised functions by gradient descent
Tensor multiplication and non-linearity
Algorithms for calculating gradients
Composition of Derivatives
Mathematician's approach
Symbolic differentiation
Programmer's approach
Automatic differentiation approach
Calculate with dual numbers
Forward-mode scales in the size of the input dimension
Chain rule doesn't care about order
Tensor dimensions must agree
Solution: expressive type systems
Need compilation (to GPU) for performance
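The syllabus items on dual numbers and forward-mode differentiation can be sketched as follows. This is an illustrative reconstruction of the standard dual-number technique, not the speaker's code; the names `Dual`, `const`, and `variable` are assumptions.

```scala
// Minimal sketch of forward-mode automatic differentiation with dual numbers.
// A dual number carries a value together with its derivative, and each
// arithmetic operation propagates both, applying the chain rule as it goes.
final case class Dual(value: Double, deriv: Double) {
  // Sum rule: (u + v)' = u' + v'
  def +(that: Dual): Dual = Dual(value + that.value, deriv + that.deriv)
  // Product rule: (u * v)' = u' * v + u * v'
  def *(that: Dual): Dual =
    Dual(value * that.value, deriv * that.value + value * that.deriv)
}

object DualDemo {
  // A constant has derivative 0; the variable of differentiation has derivative 1
  def const(x: Double): Dual = Dual(x, 0.0)
  def variable(x: Double): Dual = Dual(x, 1.0)

  def main(args: Array[String]): Unit = {
    // f(x) = x * x + 3 * x, so f'(x) = 2x + 3; at x = 2, f = 10 and f' = 7
    val x = variable(2.0)
    val fx = x * x + const(3.0) * x
    println(s"value = ${fx.value}, derivative = ${fx.deriv}")
  }
}
```

Each forward pass tracks the derivative with respect to one chosen input variable, which is why, as the syllabus notes, forward mode scales in the size of the input dimension: a function of n inputs needs n passes to recover its full gradient.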

Taught by

Scala Days Conferences
