YouTube

Transformers and the End of Inductive Bias in Neural Networks

Neural Breakdown with AVB via YouTube

Overview

Explore a 12-minute video lecture that examines how Transformer architectures revolutionized deep learning by breaking away from the inductive biases built into CNNs and RNNs. Learn about the fundamental shift from specialized neural networks with built-in assumptions (such as the locality bias of CNNs and the recency bias of RNNs) to the more generalized, data-driven approach of Transformers. Delve into the role of attention mechanisms in deep learning while building on concepts from previous discussions of attention and self-attention. Drawing on seminal research papers, discover why Transformers opted for a more flexible architecture despite their increased data requirements, marking a significant departure from conventional neural network design philosophy.
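The contrast the lecture draws can be made concrete in a few lines of code. The sketch below (illustrative only, with toy values and hypothetical random projections, not from the video) shows the locality bias of a size-3 convolution, where each output depends only on its immediate neighbors, versus single-head self-attention, whose weight matrix is dense and lets any position attend to any other in one layer:

```python
import numpy as np

# Toy sequence of 6 positions, one feature each (illustrative values).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])

# Locality bias: a 1D convolution with a size-3 kernel lets each output
# position see only itself and its two immediate neighbors.
kernel = np.array([0.25, 0.5, 0.25])
conv_out = np.convolve(x, kernel, mode="same")

# No locality bias: single-head self-attention. The query/key/value
# projections here are random placeholders, purely for illustration.
rng = np.random.default_rng(0)
d = 4
emb = x[:, None] * rng.standard_normal((1, d))          # toy embeddings, (6, d)
Q, K, V = (emb @ rng.standard_normal((d, d)) for _ in range(3))
scores = Q @ K.T / np.sqrt(d)                            # (6, 6) pairwise scores
weights = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)  # softmax rows
attn_out = weights @ V

# The attention matrix is dense: position 0 attends directly to position 5,
# something a size-3 convolution can never do in a single layer.
print(weights.shape)      # (6, 6)
print(weights[0, 5] > 0)  # True
```

The trade-off the video highlights follows directly: the convolution hard-codes which positions may interact, while attention must learn those interactions from data, which is more flexible but more data-hungry.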

Syllabus

Here is how Transformers ended the tradition of Inductive Bias in Neural Nets

Taught by

Neural Breakdown with AVB
