YouTube

DeBERTa - Decoding-Enhanced BERT with Disentangled Attention

Yannic Kilcher via YouTube

Overview

Explore a comprehensive video explanation of the DeBERTa (Decoding-enhanced BERT with Disentangled Attention) machine learning paper. Delve into the next iteration of BERT-style self-attention Transformer models, which surpasses RoBERTa and sets state-of-the-art performance on multiple NLP tasks. Learn about the key improvements, including the disentangled attention mechanism and the use of relative positional encodings. Examine the model's architecture, pretraining efficiency, and performance on downstream tasks. Follow along as the video breaks down complex concepts, presents experimental results, and discusses scaling up to 1.5 billion parameters. Gain insights into the paper's abstract and authors, and the model's impact on the SuperGLUE benchmark.
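
To make the core idea concrete, here is a minimal NumPy sketch of the disentangled attention scores discussed in the video and the paper: each score is the sum of a content-to-content, a content-to-position, and a position-to-content term over clipped relative distances. The toy dimensions, array names, and helper functions are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def relative_position_index(seq_len, k):
    """Clipped relative distance delta(i, j), mapped into the range [0, 2k)."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return np.clip(i - j, -k, k - 1) + k

def disentangled_scores(Qc, Kc, Qr, Kr, k):
    """Sum of content-to-content, content-to-position, and position-to-content terms."""
    seq_len, d = Qc.shape
    idx = relative_position_index(seq_len, k)             # (seq_len, seq_len)
    c2c = Qc @ Kc.T                                       # content query . content key
    c2p = np.take_along_axis(Qc @ Kr.T, idx, axis=1)      # content query . relative-position key
    p2c = np.take_along_axis(Kc @ Qr.T, idx, axis=1).T    # content key . relative-position query
    return (c2c + c2p + p2c) / np.sqrt(3 * d)             # scaled by sqrt(3d) as in the paper

# Toy usage: 8 tokens, head dimension 16, maximum relative distance k = 4.
rng = np.random.default_rng(0)
seq_len, d, k = 8, 16, 4
Qc, Kc = rng.normal(size=(seq_len, d)), rng.normal(size=(seq_len, d))
Qr, Kr = rng.normal(size=(2 * k, d)), rng.normal(size=(2 * k, d))
print(disentangled_scores(Qc, Kc, Qr, Kr, k).shape)       # (8, 8)
```

Passing these scores through a softmax and multiplying by the content values would complete one attention head; the full model additionally uses an enhanced mask decoder with absolute positions, as covered later in the video.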

Syllabus

- Intro & Overview
- Position Encodings in Transformer's Attention Mechanism
- Disentangling Content & Position Information in Attention
- Disentangled Query & Key Construction in the Attention Formula
- Efficient Relative Position Encodings
- Enhanced Mask Decoder using Absolute Position Encodings
- My Criticism of EMD
- Experimental Results
- Scaling up to 1.5 Billion Parameters
- Conclusion & Comments

Taught by

Yannic Kilcher
