
Transformers Explained - Part 1: Generative Music AI

Valerio Velardo - The Sound of AI via YouTube

Overview

Dive into a comprehensive video lecture on transformer architectures, focusing on their application in generative music AI. Explore the intuition, theory, and mathematical formalization behind transformers, which have become dominant in deep learning across various fields. Gain insights into the encoder structure, self-attention mechanisms, multi-head attention, positional encoding, and feedforward layers. Follow along with step-by-step explanations of each component, including visual recaps and key takeaways. Enhance your understanding of this powerful deep learning architecture and its potential in audio and music processing.
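For readers who want a preview of the self-attention mechanism the lecture covers, the sketch below implements standard scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, for a single head in NumPy. This is a minimal illustration of the general technique, not code from the video; the matrix names, dimensions, and step comments are assumptions and may not match the video's own step numbering.

import numpy as np

def softmax(scores, axis=-1):
    # Subtract the row-wise max before exponentiating for numerical stability.
    shifted = scores - scores.max(axis=axis, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    # X            : (seq_len, d_model) input matrix, one row per token/time step
    # W_q, W_k, W_v: (d_model, d_k) projection matrices for queries, keys, values
    Q = X @ W_q            # query matrix
    K = X @ W_k            # key matrix
    V = X @ W_v            # value matrix
    d_k = K.shape[-1]

    # Step 1: similarity scores between every pair of positions.
    scores = Q @ K.T
    # Step 2: scale by sqrt(d_k) so the dot products do not grow too large.
    scores = scores / np.sqrt(d_k)
    # Step 3: softmax turns each row of scores into attention weights.
    weights = softmax(scores, axis=-1)
    # Step 4: weighted sum of the value vectors.
    return weights @ V

# Tiny usage example with random data: 4 tokens, model dim 8, head dim 4.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 4)) for _ in range(3))
out = self_attention(X, W_q, W_k, W_v)
print(out.shape)  # (4, 4)

Multi-head attention, as covered in the video, runs several such heads in parallel with separate projection matrices and concatenates their outputs.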

Syllabus

Intro
Context
The intuition
Encoder
Encoder block
Self-attention
Matrices
Input matrix
Query, key, value matrices
Self-attention formula
Self-attention: Step 1
Self-attention: Step 2
Self-attention: Step 3
Self-attention: Step 4
Self-attention: Visual recap
Multi-head attention
The problem of sequence order
Positional encoding
How to compute positional encoding
Feedforward layer
Add & norm layer
Deeper meaning of encoder components
Encoder step-by-step
Key takeaways
What next?

Taught by

Valerio Velardo - The Sound of AI

