Pre-Training BERT from Scratch for Domain-Specific Knowledge Using PyTorch - Part 51
Discover AI via YouTube
Overview
Learn to pre-train a BERT (Bidirectional Encoder Representations from Transformers) model from scratch in this comprehensive Python tutorial using PyTorch for domain-specific data applications. Master the process of training an optimized tokenizer, designing custom BERT architectures, and implementing pre-training with a Masked Language Model (MLM) head. Explore techniques for defining custom vocabulary sizes ranging from 8K to 60K tokens, configuring BERT architecture depths of up to 96 layers, and optimizing GPU training for domain-specific knowledge encoding. Gain hands-on experience with transformer-based machine learning for natural language processing, and discover how to leverage the pre-trained model to build an SBERT (Sentence Transformers) model for Neural Information Retrieval systems. Follow along with the provided code examples in Google Colab to implement tokenization, model configuration, and pre-training tasks, and to evaluate the training results through practical demonstrations.
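As a rough orientation before the syllabus, the two central steps described above (training a domain-specific WordPiece tokenizer and configuring a BERT encoder with an MLM head around it) might look like the minimal sketch below, using the Hugging Face tokenizers and transformers libraries. The file name domain_corpus.txt, the output directory my_tokenizer, and the small architecture values are illustrative assumptions, not the exact settings used in the video.

```python
# Minimal sketch (assumed file names and sizes, not the video's exact settings).
import os
from tokenizers import BertWordPieceTokenizer
from transformers import BertConfig, BertForMaskedLM, BertTokenizerFast

# 1) Train a WordPiece tokenizer on the domain corpus.
#    vocab_size can sit anywhere in the 8K-60K range discussed in the video.
wp_tokenizer = BertWordPieceTokenizer(lowercase=True)
wp_tokenizer.train(
    files=["domain_corpus.txt"],          # hypothetical corpus file
    vocab_size=30_000,
    min_frequency=2,
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
)
os.makedirs("my_tokenizer", exist_ok=True)
wp_tokenizer.save_model("my_tokenizer")   # writes my_tokenizer/vocab.txt

# 2) Load it as a fast tokenizer and define a custom BERT architecture.
tokenizer = BertTokenizerFast.from_pretrained("my_tokenizer")
config = BertConfig(
    vocab_size=tokenizer.vocab_size,
    hidden_size=256,                      # illustrative; BERT-base uses 768
    num_hidden_layers=6,                  # the video discusses depths up to 96
    num_attention_heads=4,
    intermediate_size=1024,
    max_position_embeddings=512,
)
model = BertForMaskedLM(config)           # BERT encoder plus MLM head
print(f"{model.num_parameters():,} parameters")
```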
Syllabus
Introduction
Downloading data sets
Tokenization
Tokenizer
Fast implementation
Fast tokenizer
Encoding
Training Data Set
BERT Model
BERT Model Configuration
BERT Model Pretraining (see the code sketch after this syllabus)
Masking Task
Training Arguments
Training Example
Training Results
Training Loss
SBERT Model
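The pretraining chapters above (masking task, training arguments, training example) can be wired together roughly as sketched below with Hugging Face's Trainer and an MLM data collator. The dataset path, sequence length, and hyperparameters are illustrative assumptions rather than the values used in the video.

```python
# Minimal MLM pre-training sketch (assumed paths and hyperparameters).
import torch
from datasets import load_dataset
from transformers import (
    BertTokenizerFast, BertConfig, BertForMaskedLM,
    DataCollatorForLanguageModeling, Trainer, TrainingArguments,
)

tokenizer = BertTokenizerFast.from_pretrained("my_tokenizer")  # from the earlier sketch
model = BertForMaskedLM(BertConfig(vocab_size=tokenizer.vocab_size))

# Load the raw domain text and tokenize it into fixed-length examples.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=128),
    batched=True,
    remove_columns=["text"],
)

# The collator masks 15% of input tokens on the fly (the MLM objective).
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

args = TrainingArguments(
    output_dir="bert-domain-pretrain",
    per_device_train_batch_size=32,
    num_train_epochs=3,
    learning_rate=1e-4,
    logging_steps=100,
    save_steps=1_000,
    fp16=torch.cuda.is_available(),       # mixed precision when a GPU is present
)

Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=collator,
).train()
```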
Taught by
Discover AI