Fast Neural Kernel Embeddings for General Activations

Google TechTalks via YouTube

Overview

Explore the intricacies of neural kernel embeddings for general activations in this 34-minute Google TechTalk presented by Insu Han. Delve into infinite width limits and the connection they establish between neural networks and kernel methods. Discover how to overcome the limitations of kernel methods in large-scale learning settings, including their runtime and memory complexities, which grow quadratically in the number of examples. Learn about methods for handling general activations beyond the commonly analyzed ReLU, including exact dual activation expressions and effective approximation techniques. Examine a fast sketching method for approximating multi-layer Neural Network Gaussian Process (NNGP) kernel and Neural Tangent Kernel (NTK) matrices using truncated Hermite expansion. Understand the advantages of these methods, which apply to any dataset of points in ℝᵈ, without requiring that data points lie on the unit sphere. Explore a subspace embedding for NNGP and NTK matrices with near input-sparsity runtime and near-optimal target dimension for homogeneous dual activation functions. Gain insights into the empirical results, including a 106× speedup for approximate Convolutional Neural Tangent Kernel (CNTK) computation of a 5-layer Myrtle network on the CIFAR-10 dataset.
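
To make the core approximation tool concrete, here is a minimal NumPy sketch of a truncated Hermite expansion of a dual activation, assuming unit-norm inputs so the dual activation depends only on the input correlation ρ. This illustrates the general idea rather than the speaker's implementation; the function names (hermite_coeffs, dual_activation) and quadrature settings are illustrative. The sanity check uses the known closed-form ReLU dual activation (the arc-cosine kernel).

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval


def hermite_coeffs(sigma, q, quad_points=80):
    """Coefficients a_i of sigma in the normalized probabilists' Hermite
    basis h_i = He_i / sqrt(i!), computed by Gauss-Hermite quadrature."""
    x, w = hermegauss(quad_points)       # nodes/weights for weight exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)         # renormalize to the N(0,1) density
    sx = sigma(x)
    a = np.empty(q + 1)
    for i in range(q + 1):
        c = np.zeros(i + 1)
        c[i] = 1.0
        he_i = hermeval(x, c)            # He_i evaluated at the quadrature nodes
        a[i] = np.sum(w * sx * he_i) / math.sqrt(math.factorial(i))
    return a


def dual_activation(rho, a):
    """Truncated dual activation k(rho) ~ sum_i a_i^2 rho^i, valid for
    unit-norm inputs where rho is the correlation between two inputs."""
    powers = rho[..., None] ** np.arange(len(a))
    return powers @ (a ** 2)


# Sanity check against the exact ReLU dual activation (arc-cosine kernel):
# k(rho) = (sqrt(1 - rho^2) + rho * (pi - arccos(rho))) / (2 * pi).
relu = lambda z: np.maximum(z, 0.0)
a = hermite_coeffs(relu, q=12)
rho = np.linspace(-1.0, 1.0, 9)
exact = (np.sqrt(1 - rho**2) + rho * (np.pi - np.arccos(rho))) / (2 * np.pi)
print(np.max(np.abs(dual_activation(rho, a) - exact)))  # error shrinks as q grows
```

Because the truncated dual activation is a degree-q polynomial in ρ, composing it across layers keeps each NNGP/NTK layer map polynomial, which is what makes the sketching approach in the talk tractable.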

Syllabus

Fast Neural Kernel Embeddings for General Activations

Taught by

Google TechTalks
