Amazon SageMaker helps data scientists prepare, build, train, deploy, and monitor machine learning (ML) models. SageMaker brings together a broad set of capabilities, including access to distributed training libraries, open source models, and foundation models (FMs). This course introduces experienced data scientists to the challenges of building language models and to the storage, ingestion, and training options available for processing a large text corpus. The course also discusses the challenges of deploying large models and of customizing foundation models for generative artificial intelligence (generative AI) tasks using Amazon SageMaker JumpStart.
- Course level: Advanced
- Duration: 5.5 hours
Activities
This course includes text instruction, illustrative graphics, knowledge check questions, and video demonstrations of labs you can run in your own Amazon Web Services (AWS) account.
Course objectives
After completing this course, data scientists can confidently build, train, and tune performant language models on AWS using SageMaker.
In this course, you will learn to do the following:
- Apply best practices for storing and ingesting large volumes of text data to support distributed training
- Explore data parallelism and model parallelism libraries to support distributed training on SageMaker
- Explain the options available on SageMaker to improve training performance, such as Amazon SageMaker Training Compiler and Elastic Fabric Adapter (EFA)
- Explore large language model (LLM) optimization techniques for effective model deployment
- Demonstrate how to fine-tune foundation models available in SageMaker JumpStart
Intended audience
This course is intended for the following roles:
- Data scientists
- ML engineers
Prerequisites
We recommend that attendees of this course have the following experience and prior training:
- More than 1 year of experience with natural language processing (NLP)
- More than 1 year of experience with training and tuning language models
- Intermediate-level proficiency in Python programming
- Completion of the AWS Technical Essentials course
- Completion of the Amazon SageMaker Studio for Data Scientists course
Course outline
Course Series Introduction
Section 1: Introduction
- Introduction to Building Language Models on AWS
Section 2: Large Language Model Basics
- Types of Large Language Models
- Common Generative AI Use Cases
Section 3: Course Series Outline
- Topics Covered in Future Modules
Addressing the Challenges of Building Language Models
Section 1: Common Challenges
- Common LLM Practitioner Challenges
Section 2: Multi-Machine Training Solutions
- Scaling LLMs with Distributed Training
- Applying Data Parallelism Techniques
- Applying Model Parallelism Techniques
Section 3: Performance Optimization Solutions
- Performance Optimization Techniques
- Using Purpose-Built Infrastructure
Section 4: Wrap Up
- Module Assessment
Using Amazon SageMaker for Training Language Models
Section 1: Configuring SageMaker Studio
- SageMaker Basics
- Setting up a SageMaker Studio Domain
Section 2: SageMaker Infrastructure
- Choosing Compute Instance Types
Section 3: Working with the SageMaker Python SDK
- SageMaker Python SDK Basics
- Training and Deploying Language Models with the SageMaker Python SDK
Section 4: Wrap Up
- Module Assessment
Demonstration - Setting up Amazon SageMaker Studio
Ingesting Language Model Data
Section 1: Preparing Data
- Data Management Overview
- Preparing Data for Ingestion
Section 2: Analyzing Data Ingestion Options
- Loading Data with the SageMaker Python SDK
- Ingesting Data from Amazon S3
- Ingesting Data with Amazon FSx for Lustre
- Additional Data Ingestion Options
- Data Ingestion and Storage Considerations
Section 3: Wrap Up
- Module Assessment
Training Large Language Models
Section 1: Creating a SageMaker Training Job
- Launching SageMaker Training Jobs
- Modifying Scripts for Script Mode
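The script mode topic above refers to SageMaker's convention of passing hyperparameters to your training script as command-line arguments and supplying data and output paths through `SM_*` environment variables (such as `SM_MODEL_DIR` and `SM_CHANNEL_TRAIN`). A minimal sketch of a script-mode entry point follows; the hyperparameter names are illustrative, not prescribed by SageMaker:

```python
# Sketch of a SageMaker script-mode entry point. Script mode passes
# hyperparameters as CLI arguments and paths via SM_* environment variables;
# the hyperparameter names below (--epochs, --learning-rate) are illustrative.
import argparse
import os

def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    # Hyperparameters arrive as command-line arguments.
    parser.add_argument("--epochs", type=int, default=1)
    parser.add_argument("--learning-rate", type=float, default=5e-5)
    # SageMaker sets SM_MODEL_DIR and SM_CHANNEL_<NAME> inside the training
    # container; the fallback defaults let the script also run locally.
    parser.add_argument(
        "--model-dir", type=str,
        default=os.environ.get("SM_MODEL_DIR", "/opt/ml/model"))
    parser.add_argument(
        "--train-dir", type=str,
        default=os.environ.get("SM_CHANNEL_TRAIN", "/opt/ml/input/data/train"))
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    print(f"Training for {args.epochs} epoch(s) at lr={args.learning_rate}")
    # ... training loop goes here; save artifacts to args.model_dir ...
```

Because the container defaults are read at parse time, the same script runs unchanged inside a SageMaker training job and on a local machine during development.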
Section 2: Optimizing Your SageMaker Training Job
- Monitoring and Troubleshooting
- Optimizing Computational Performance
- SageMaker Training Features for Language Model Training
Section 3: Using Distributed Training on SageMaker
- SageMaker Distributed Training Support
- Using the SageMaker Distributed Data Parallel Library
- Using the SageMaker Model Parallel Library
- Using the SageMaker Model Parallel Library and Sharded Data Parallelism
- Training with EFA
Section 4: Compiling Your Training Code
- Using the SageMaker Training Compiler
Section 5: Wrap Up
- Module Assessment
Demonstration - Training Your First Language Model with Amazon SageMaker
Demonstration - Data Parallel on SageMaker Training with PyTorch Lightning
Demonstration - Fine-tune GPT-2 with Near-Linear Scaling Using the Sharded Data Parallelism Technique in the Amazon SageMaker Model Parallelism Library
Deploying Language Models
Section 1: Deploying a Model in SageMaker
- Introduction to SageMaker Deployment
- Choosing a SageMaker Deployment Option
Section 2: Deploying Models for Inference
- Real-Time Inference Overview
- Using the SageMaker Python SDK for Model Deployment
- Using the SageMaker Inference Recommender
Section 3: Deploying Large Language Models for Inference
- Optimization Techniques
- Model Compression Techniques
- Model Partitioning
- Optimized Kernels and Compilation
- Deploying with SageMaker LMI Containers
Section 4: Additional Considerations
- Other Considerations When Deploying Models on SageMaker
Section 5: Wrap Up
- Module Assessment
Demonstration - Introduction to LLM Hosting on Amazon SageMaker with DeepSpeed Containers
Customizing Foundation Language Models for Generative AI Tasks
Section 1: Introduction
- Introduction to Foundation Models
Section 2: Using SageMaker JumpStart
- Getting Started with SageMaker JumpStart
- Deploying SageMaker JumpStart Models with the SageMaker Python SDK
- Selecting an FM
Section 3: Customizing FMs
- Prompt Engineering
- Fine-tuning JumpStart Models with the SageMaker Python SDK
Section 4: Retrieval Augmented Generation (RAG)
- Using Retrieval Augmented Generation (RAG)
Section 5: Wrap Up
- Module Assessment
Demonstration - Deploy a FLAN-T5 Model for Text Generation Tasks Using Amazon SageMaker JumpStart
Call to Action and Additional Resources
Section 1: Review
- Topics Covered in This Course Series
Section 2: Wrap Up
- Resources, Recap, and Next Steps