

Rust for Large Language Model Operations (LLMOps)

Pragmatic AI Labs via edX

Overview

This advanced course prepares you for the cutting edge of AI development by combining the power of Rust with Large Language Model Operations (LLMOps):

  • Learn to build scalable LLM solutions using the performance of Rust
  • Integrate Rust with LLM frameworks such as Hugging Face Transformers, Candle, and ONNX

Get trained in the latest AI/ML innovations while mastering systems programming with Rust, your pathway to building state-of-the-art LLM applications.

  • Optimize LLM training/inference by leveraging Rust's parallelism and GPU acceleration
  • Build Rust bindings for seamless integration with Hugging Face Transformers
  • Convert and deploy BERT models to Rust apps via ONNX runtime
  • Utilize Candle for streamlined ML model building and training in Rust
  • Host and scale LLM solutions on AWS cloud infrastructure
  • Hands-on labs: Build chatbots, text summarizers, machine translation
  • Apply DevOps practices for LLMOps: CI/CD, monitoring, and security
  • Techniques for memory safety, multithreading, lock-free concurrency
  • Best practices for LLMOps reliability, scalability, cost optimization
  • Real-world projects demonstrating production-ready LLMOps expertise
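The compiler-enforced concurrency the course highlights (memory safety, multithreading without data races) can be previewed with a minimal, standard-library-only sketch. This is illustrative only, not course material: the borrow checker guarantees each scoped thread has exclusive access to its own chunk of the data.

```rust
use std::thread;

// Sum a slice in parallel across non-overlapping chunks.
// Rust's ownership rules make a data race a compile error:
// each scoped thread borrows only its own disjoint chunk.
fn parallel_sum(data: &[i64], chunks: usize) -> i64 {
    let chunk_size = ((data.len() + chunks - 1) / chunks).max(1);
    thread::scope(|s| {
        let handles: Vec<_> = data
            .chunks(chunk_size)
            .map(|chunk| s.spawn(move || chunk.iter().sum::<i64>()))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let data: Vec<i64> = (1..=100).collect();
    println!("{}", parallel_sum(&data, 4)); // 5050
}
```

`thread::scope` (stable since Rust 1.63) lets the threads borrow `data` without `Arc`, because the scope guarantees all threads finish before the borrow ends.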

Syllabus

Module 1: DevOps Concepts for MLOps (6 hours)

- Instructor Intro (Video - 1 minute)
- A Function, the Essence of Programming (Video - 6 minutes)
- Operationalize Microservices (Video - 1 minute)
- Continuous Integration for Microservices (Video - 6 minutes)
- What is a Makefile and how do you use it? (Video - 2 minutes)
- What is DevOps? (Video - 2 minutes)
- Kaizen methodology (Video - 4 minutes)
- Infrastructure as Code for Continuous Delivery (Video - 2 minutes)
- Responding to Compromised Resources and Workloads (Video - 4 minutes)
- Designing and Implementing Monitoring and Alerting (Video - 1 minute)
- Audit Network Security (Video - 1 minute)
- Rust Secure by Design (Video - 4 minutes)
- Preventing Data Races with the Rust Compiler (Video - 3 minutes)
- Using AWS Config for Security (Video - 4 minutes)
- AWS Security Hub Demo (Video - 3 minutes)
- Explain How to Secure Your Account with 2FA (Video - 3 minutes)
- Understanding Access Permissions (Video - 4 minutes)
- Repository Permission Levels Explained (Video - 2 minutes)
- Repository Privacy Settings and Options (Video - 2 minutes)
- Unveiling Key Concepts of the GitHub Ecosystem (Video - 3 minutes)
- Demo: Implementing GitHub Actions (Video - 3 minutes)
- Demo: GitHub Codespaces (Video - 6 minutes)
- Demo: GitHub Copilot (Video - 8 minutes)
- Source Code Resources (Reading - 10 minutes)
- Infrastructure as code (Reading - 10 minutes)
- Continuous integration (Reading - 10 minutes)
- Continuous delivery (Reading - 10 minutes)
- Automation and tooling (Reading - 10 minutes)
- Shared responsibility (Reading - 10 minutes)
- Identity and access management (Reading - 10 minutes)
- Infrastructure protection (Reading - 10 minutes)
- Incident response (Reading - 10 minutes)
- External Lab: Use GitHub Actions and Codespaces (Reading - 10 minutes)
- About two-factor authentication (Reading - 10 minutes)
- Access permissions on GitHub (Reading - 10 minutes)
- About Continuous Integration (Reading - 10 minutes)
- About continuous deployment (Reading - 10 minutes)
- Final Week Reflections (Reading - 10 minutes)
- DevOps Concepts for MLOps (Quiz - 30 minutes)
- Lab: Using a Makefile with Rust (Ungraded Lab - 60 minutes)
- Lab: Preventing Data Races in Rust (Ungraded Lab - 60 minutes)
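The "Using a Makefile with Rust" lab pairs `make` targets with `cargo` subcommands so that CI and local workflows share one entry point. A hypothetical layout (not the course's actual lab file) might look like:

```make
# Hypothetical Makefile wrapping common cargo commands.
format:
	cargo fmt --quiet

lint:
	cargo clippy --quiet -- -D warnings

test:
	cargo test --quiet

release:
	cargo build --release

# Run the full local quality gate, mirroring CI.
all: format lint test
```

A GitHub Actions workflow can then call `make all`, keeping the pipeline definition in one place.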

Module 2: Rust Hugging Face Candle (4 hours)

- Candle: A Minimalist ML Framework for Rust (Video - 2 minutes)
- Using GitHub Codespaces for GPU Inference with Rust Candle (Video - 5 minutes)
- VSCode Remote SSH Development on AWS Accelerated Compute (Video - 5 minutes)
- Building Hello World Candle (Video - 2 minutes)
- Exploring StarCoder: A State-of-the-Art LLM (Video - 5 minutes)
- Using Whisper with Candle to Transcribe (Video - 5 minutes)
- Exploring Remote Dev Architectures on AWS (Video - 2 minutes)
- Advantages of Rust for LLMs (Video - 1 minute)
- Serverless Inference (Video - 1 minute)
- Rust CLI Inference (Video - 2 minutes)
- Rust Chat Inference (Video - 1 minute)
- Continuous Build of Binaries for LLMOps (Video - 2 minutes)
- Chat Loop for StarCoder (Video - 2 minutes)
- Invoking Rust Candle on AWS G5, Part One (Video - 4 minutes)
- Invoking BigCode on AWS G5, Part Two (Video - 3 minutes)
- rust-candle-demos (Reading - 10 minutes)
- Configuring NVIDIA CUDA for your codespace (Reading - 10 minutes)
- Getting Started with Candle (Reading - 10 minutes)
- Candle Examples (Reading - 10 minutes)
- External Lab: Candle Hello World (Reading - 10 minutes)
- External Lab: Run an LLM with Candle (Reading - 10 minutes)
- cuDNN Developer Guide (Reading - 10 minutes)
- cuDNN Webinar (Reading - 10 minutes)
- Programming Tensor Cores in CUDA 9 (Reading - 10 minutes)
- Tensor Ops Made Easier in cuDNN (Reading - 10 minutes)
- External Lab: Using BigCode to Assist With Coding (Reading - 10 minutes)
- StarCoder: A State-of-the-Art LLM for Code (Reading - 10 minutes)
- Falcon LLM (Reading - 10 minutes)
- Whisper LLM (Reading - 10 minutes)
- Candle Structure (Reading - 10 minutes)
- Final Week Reflection (Reading - 10 minutes)
- Rust Hugging Face Candle (Quiz - 30 minutes)
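The "Rust Chat Inference" and "Chat Loop for StarCoder" lessons reduce to a read-eval-print loop around a generate function. A standard-library-only skeleton is sketched below; the `generate` function is a stub standing in for a Candle-backed model call, not the course's implementation.

```rust
use std::io::{self, BufRead, Write};

// Stub standing in for a Candle-backed model call; a real
// implementation would run StarCoder inference here.
fn generate(prompt: &str) -> String {
    format!("[model output for: {prompt}]")
}

// Chat loop: read a line, run inference, print the reply.
// `quit` exits; empty lines are skipped.
fn chat_loop(input: impl BufRead, mut output: impl Write) -> io::Result<()> {
    for line in input.lines() {
        let prompt = line?;
        let prompt = prompt.trim();
        if prompt == "quit" {
            break;
        }
        if prompt.is_empty() {
            continue;
        }
        writeln!(output, "{}", generate(prompt))?;
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let stdin = io::stdin();
    chat_loop(stdin.lock(), io::stdout())
}
```

Taking `impl BufRead` and `impl Write` keeps the loop testable with in-memory buffers, the same shape a continuously built CLI binary would use.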

Module 3: Key LLMOps Technologies (3 hours)

- Introduction to Rust Bert (Video - 1 minute)
- Installation and Setup (Video - 5 minutes)
- Basic Syntax and Model Loading (Video - 2 minutes)
- Building a Sentiment Analysis CLI (Video - 4 minutes)
- Introduction to Rust PyTorch (Video - 1 minute)
- Running a PyTorch Hello World (Video - 2 minutes)
- PyTorch Pretrained (Video - 3 minutes)
- Running PyTorch Pretrained (Video - 6 minutes)
- Introduction to ONNX (Video - 1 minute)
- ONNX Conversions (Video - 2 minutes)
- Getting Started with Rust Bert (Reading - 10 minutes)
- External Lab: Translate a Spanish Song to English (Reading - 10 minutes)
- Rust Bert Pipelines (Reading - 10 minutes)
- ONNX Support in Rust Bert (Reading - 10 minutes)
- Loading Pretrained and Custom Model Weights (Reading - 10 minutes)
- External Lab: Run a Pretrained Model (Reading - 10 minutes)
- Rust Bindings for PyTorch (Reading - 10 minutes)
- ONNX Concepts (Reading - 10 minutes)
- ONNX with Python (Reading - 10 minutes)
- Converters (Reading - 10 minutes)
- ONNX Model Hub (Reading - 10 minutes)
- Final Week Reflections (Reading - 10 minutes)
- External Lab: Use ONNX (Reading - 10 minutes)
- Using Rust Bert (Quiz - 30 minutes)
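The sentiment-analysis CLI built in this module follows a simple shape: parse input, classify, print a label. The skeleton below is a toy illustration only; its `classify` function is a lexicon-counting stub standing in for rust-bert's transformer-based sentiment pipeline, which the course uses instead.

```rust
// Skeleton of a sentiment-analysis CLI. The classify function is a
// toy word-counting stub, NOT rust-bert's DistilBERT pipeline.

#[derive(Debug, PartialEq)]
enum Sentiment {
    Positive,
    Negative,
    Neutral,
}

// Toy stand-in: count positive vs. negative cue words.
fn classify(text: &str) -> Sentiment {
    const POSITIVE: &[&str] = &["good", "great", "love", "excellent"];
    const NEGATIVE: &[&str] = &["bad", "terrible", "hate", "awful"];
    let mut score = 0i32;
    for word in text.to_lowercase().split_whitespace() {
        if POSITIVE.contains(&word) {
            score += 1;
        } else if NEGATIVE.contains(&word) {
            score -= 1;
        }
    }
    match score {
        s if s > 0 => Sentiment::Positive,
        s if s < 0 => Sentiment::Negative,
        _ => Sentiment::Neutral,
    }
}

fn main() {
    // Read the text to classify from the command-line arguments.
    let text = std::env::args().skip(1).collect::<Vec<_>>().join(" ");
    println!("{:?}", classify(&text));
}
```

Swapping the stub for a real model call leaves the CLI structure unchanged, which is the point of the module's exercise.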

Module 4: Key Generative AI Technologies (3 hours)

- Extending Google Bard (Video - 4 minutes)
- Exploring Google Colab with Bard (Video - 4 minutes)
- Exploring Colab AI (Video - 4 minutes)
- Exploring Gen App Builder (Video - 2 minutes)
- Responsible AI with AWS Bedrock (Video - 4 minutes)
- AWS Bedrock with Claude (Video - 7 minutes)
- Summarizing Text with Claude (Video - 5 minutes)
- Using the AWS Bedrock API (Video - 1 minute)
- Live Coding AWS CodeWhisperer, Part One (Video - 6 minutes)
- Live Coding AWS CodeWhisperer, Part Two (Video - 14 minutes)
- Live Coding AWS CodeWhisperer, Part Three (Video - 7 minutes)
- Using the AWS CodeWhisperer CLI (Video - 3 minutes)
- Bard FAQ (Reading - 10 minutes)
- External Lab: Build a Plot with Colab AI (Reading - 10 minutes)
- External Lab: AWS Bedrock (Reading - 10 minutes)
- AWS Cloud Adoption Framework for Artificial Intelligence, Machine Learning, and Generative AI (Reading - 10 minutes)
- People Perspective: Culture and Change Toward AI/ML-First (Reading - 10 minutes)
- External Lab: Use CodeWhisperer for a Rust Calculator (Reading - 10 minutes)
- Key LLMOps Technologies (Quiz - 30 minutes)
- Final Quiz (Quiz - 30 minutes)

Taught by

Noah Gift

