Efficient and Portable AI/LLM Inference on the Edge Cloud - Workshop

Linux Foundation via YouTube

Overview

Explore efficient and portable AI/LLM inference on the edge cloud in this 48-minute workshop presented by Xiaowei Hu from Second State. Learn about the challenges of running AI workloads on heterogeneous hardware and discover how WebAssembly (Wasm) offers a lightweight, fast, and portable solution. Gain hands-on experience creating and running Wasm-based AI applications on edge servers or local hosts. Examine practical examples using AI models and libraries for media processing (MediaPipe), computer vision (YOLO, LLaVA), and natural language processing (the Llama2 series). Follow along with live demonstrations and run all of the examples on your own laptop during the session, gaining practical insight into efficient AI deployment strategies for edge computing environments.
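To give a sense of what "Wasm-based AI inference" looks like in practice, here is a minimal sketch of an LLM inference program compiled to Wasm and run with the WasmEdge runtime's WASI-NN (GGML) plugin, which is the approach Second State's tooling is built around. This is an illustration, not code from the workshop: the `wasmedge-wasi-nn` crate, the "default" model alias, and the exact method signatures are assumptions based on the WasmEdge Rust SDK.

```rust
// Hypothetical sketch of Wasm-based LLM inference, in the spirit of the
// workshop's Llama2 examples. Assumes the `wasmedge-wasi-nn` crate and the
// WasmEdge GGML (llama.cpp) plugin, with a GGUF model preloaded under the
// alias "default" when launching the runtime, e.g.:
//   wasmedge --dir .:. --nn-preload default:GGML:AUTO:llama-2-7b-chat.Q5_K_M.gguf app.wasm
use wasmedge_wasi_nn::{ExecutionTarget, GraphBuilder, GraphEncoding, TensorType};

fn main() {
    // Load the preloaded GGUF model by its alias.
    let graph = GraphBuilder::new(GraphEncoding::Ggml, ExecutionTarget::AUTO)
        .build_from_cache("default")
        .expect("failed to load the preloaded model");
    let mut ctx = graph
        .init_execution_context()
        .expect("failed to create an execution context");

    // Feed the prompt as a UTF-8 byte tensor at input index 0.
    let prompt = "What is WebAssembly?";
    ctx.set_input(0, TensorType::U8, &[1], prompt.as_bytes())
        .expect("failed to set the prompt");

    // Run inference and read the generated text from output index 0.
    ctx.compute().expect("inference failed");
    let mut out = vec![0u8; 8192];
    let n = ctx.get_output(0, &mut out).expect("failed to read output");
    println!("{}", String::from_utf8_lossy(&out[..n]));
}
```

Built once with `cargo build --target wasm32-wasi --release`, the same .wasm binary can run unmodified on any edge server or laptop where WasmEdge and its GGML plugin are installed, which is the portability argument the workshop makes.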

Syllabus

Workshop: Efficient and Portable AI / LLM Inference on the Edge Cloud - Xiaowei Hu, Second State

Taught by

Linux Foundation
