
Optimizing LLM Inference with AWS Trainium, Ray, vLLM, and Anyscale

Anyscale via YouTube

Overview

Discover how to optimize large language model (LLM) inference using AWS Trainium, Ray, vLLM, and Anyscale in this 46-minute webinar. Learn to scale and productionize LLM workloads cost-effectively by leveraging AWS accelerator instances, including AWS Inferentia, for reliable LLM serving at scale. Explore how to build a complete LLM inference stack with vLLM and Ray on Amazon EKS, and see how Anyscale's performance and enterprise capabilities support demanding LLM and GenAI inference workloads. Gain insight into using AWS Inferentia accelerators for leading price-performance, running AWS compute instances on Anyscale for optimized LLM inference, and using Anyscale's managed enterprise LLM inference offering with its advanced cluster management optimizations. Ideal for AI engineers seeking to operationalize generative AI models cost-efficiently at scale, and for infrastructure engineers planning to support GenAI use cases and LLM inference in their organizations.
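To give a concrete flavor of the stack the webinar covers, here is a minimal sketch (not taken from the webinar itself) of wrapping a vLLM engine in a Ray Serve deployment so it can be scaled out on a Ray cluster, such as one running on Amazon EKS or Anyscale. The model name, replica count, and resource numbers are illustrative assumptions; on Trainium or Inferentia instances you would additionally configure a Neuron-enabled vLLM build.

```python
# Minimal sketch: vLLM for batched LLM inference, wrapped in a Ray Serve
# deployment so replicas can scale across a Ray cluster (e.g. on EKS or
# Anyscale). Values below are illustrative assumptions, not webinar settings.
from ray import serve
from vllm import LLM, SamplingParams


@serve.deployment(num_replicas=1, ray_actor_options={"num_cpus": 4})
class VLLMServer:
    def __init__(self):
        # Hypothetical model choice; on Inferentia/Trainium instances a
        # Neuron-enabled vLLM build and device configuration would go here.
        self.llm = LLM(model="facebook/opt-125m")
        self.sampling = SamplingParams(temperature=0.8, max_tokens=128)

    async def __call__(self, request):
        # Ray Serve passes an HTTP request; read the prompt from its JSON body.
        prompt = (await request.json())["prompt"]
        outputs = self.llm.generate([prompt], self.sampling)
        return {"text": outputs[0].outputs[0].text}


app = VLLMServer.bind()
# serve.run(app) would start serving on the local Ray cluster; raising
# num_replicas spreads replicas across the cluster's accelerator nodes.
```

Scaling in this pattern is a matter of raising num_replicas and letting Ray schedule replicas onto available accelerator instances, which is the cluster-management problem the webinar positions Anyscale as solving.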

Syllabus

Optimizing LLM Inference with AWS Trainium, Ray, vLLM, and Anyscale

Taught by

Anyscale
