Overview
This course will teach you how to deploy and manage large language models (LLMs) in production using AWS services like Amazon Bedrock. By the end of the course, you will know how to:
Choose the right LLM architecture and model for your application using AWS services such as Amazon Bedrock.
Optimize the cost, performance, and scalability of LLMs on AWS using auto-scaling groups, spot instances, and container orchestration.
Monitor and log metrics from your LLMs to detect issues and continuously improve quality.
Build reliable and secure pipelines to train, deploy, and update models using AWS services.
Comply with regulations when deploying LLMs in production through techniques such as differential privacy and controlled rollouts.
This course is unique in its focus on real-world operationalization of large language models using AWS. You will work through hands-on labs to put concepts into practice as you learn. Whether you are a machine learning engineer, data scientist, or technical leader, you will gain practical skills to run LLMs in production.
Syllabus
- Getting Started with Developing on AWS for AI
- In this module, you will learn how to set up a Rust development environment, use the AWS SDK for Rust, and build AWS Lambda functions with Rust (a minimal Lambda sketch appears after the syllabus).
- AI Pair Programming from CodeWhisperer to Prompt Engineering
- In this module, you will learn to guide AI pair programmers: CodeWhisperer writes the code while you steer it, large language models turn data into content, and chain-of-thought prompts make models explain their reasoning. You will craft prompts to shape outputs, build CLI tools and bash functions, and use the CodeWhisperer CLI to automate tasks for fast, efficient coding with AI (a prompt-crafting sketch appears after the syllabus).
- Amazon Bedrock
- In this module, you will learn about Amazon Bedrock's capabilities and apply them through model evaluations and customizations (a Bedrock invocation sketch appears after the syllabus).
- Project Challenges
- In this module, you will challenge yourself to apply the concepts covered in the previous modules in a new context.
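As a preview of the first module, here is a minimal sketch of an AWS Lambda function in Rust. It assumes the lambda_runtime, serde_json, and tokio crates and a JSON event with an optional "name" field; the handler and payload shape are illustrative, not the course's exact code.

```rust
use lambda_runtime::{service_fn, Error, LambdaEvent};
use serde_json::{json, Value};

// Read an optional "name" field from the JSON event and return a greeting.
async fn handler(event: LambdaEvent<Value>) -> Result<Value, Error> {
    let name = event
        .payload
        .get("name")
        .and_then(Value::as_str)
        .unwrap_or("world");
    Ok(json!({ "message": format!("Hello, {name}!") }))
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Hand the handler to the Lambda runtime's event loop.
    lambda_runtime::run(service_fn(handler)).await
}
```

A binary like this is typically built and deployed with cargo-lambda, then invoked with a payload such as {"name": "Rustacean"}.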
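For the pair-programming module, a small illustration of chain-of-thought prompting: a hypothetical helper that wraps a question in step-by-step instructions. The wording is an assumption for demonstration, not a CodeWhisperer or course API.

```rust
// Hypothetical helper: wrap a question in chain-of-thought instructions so
// the model shows its reasoning before giving the final answer.
fn chain_of_thought_prompt(question: &str) -> String {
    format!(
        "Think step by step and show your reasoning before giving the final \
         answer.\n\nQuestion: {question}\nReasoning:"
    )
}

fn main() {
    // Example usage: print the assembled prompt.
    println!(
        "{}",
        chain_of_thought_prompt("Why might spot instances lower LLM serving costs?")
    );
}
```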
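And for the Bedrock module, a minimal sketch of invoking a model through Amazon Bedrock with the AWS SDK for Rust. It assumes the aws-config, aws-sdk-bedrockruntime, serde_json, and tokio crates, Bedrock model access enabled in your account, and an SDK version exposing these accessors; the model ID and request payload are illustrative and model-specific.

```rust
use aws_sdk_bedrockruntime::{primitives::Blob, Client};
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load region and credentials from the environment.
    let config = aws_config::load_from_env().await;
    let client = Client::new(&config);

    // Request payload in the Anthropic Messages format used on Bedrock
    // (payload shapes vary by model; this one is illustrative).
    let body = json!({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Explain Amazon Bedrock in one sentence."}]
    });

    let resp = client
        .invoke_model()
        .model_id("anthropic.claude-3-haiku-20240307-v1:0") // example model ID
        .content_type("application/json")
        .body(Blob::new(body.to_string()))
        .send()
        .await?;

    // The response body is raw JSON; parse and pretty-print it.
    let out: serde_json::Value = serde_json::from_slice(resp.body().as_ref())?;
    println!("{out:#}");
    Ok(())
}
```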
Taught by
Noah Gift, Alfredo Deza, and Derek Wales