
Running LLMs in the Cloud - Best Practices and Deployment Approaches

Linux Foundation via YouTube

Overview

Explore the growing demand for running Large Language Models (LLMs) in cloud environments and learn best practices for deploying them in cloud-native settings. Discover three key approaches to LLM deployment: Python-based solutions, native runtimes such as llama.cpp or vLLM, and WebAssembly as an abstraction layer. Examine the benefits and challenges of each method, focusing on real-world applications, ease of integration, portability, and resource efficiency. Gain insight into the CNCF CNAI ecosystem landscape and receive practical advice for selecting and implementing the strategy that best fits your needs. The talk demystifies cloud-native AI and offers a clear roadmap for deploying LLMs in the cloud, weighing the strengths and trade-offs of each approach so you can make informed decisions.
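One practical consequence of these approaches is that several of the runtimes mentioned (for example, vLLM's built-in server, and Wasm-based stacks such as LlamaEdge) can expose an OpenAI-compatible HTTP API, so the same client code works regardless of which runtime sits behind it. The sketch below is illustrative only: the endpoint URL, port, and model name are placeholder assumptions, not details from the talk.

```python
import json
from urllib import request

# Assumption: a local runtime (e.g. vLLM or a Wasm-based server) is listening
# on port 8000 with an OpenAI-compatible chat-completions endpoint.
API_URL = "http://localhost:8000/v1/chat/completions"


def build_chat_request(model: str, prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-style chat-completion payload.

    The payload format is runtime-agnostic, which is what makes it easy to
    swap one deployment approach for another behind the same client code.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def send(payload: dict) -> dict:
    """POST the payload to the local server (requires a running runtime)."""
    req = request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # "llama-3-8b-instruct" is a placeholder model name.
    payload = build_chat_request("llama-3-8b-instruct", "What is WebAssembly?")
    print(json.dumps(payload, indent=2))
```

Because only the server process changes between a Python-based deployment, a native runtime, and a Wasm runtime, client code like this stays stable while you experiment with the trade-offs the talk covers.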

Syllabus

Keynote: Running LLMs in the Cloud - Miley Fu, Developer Advocate, Second State

Taught by

Linux Foundation
