Leveraging Wasm for Portable AI Inference Across GPUs, CPUs, OS and Cloud-Native Environments

CNCF [Cloud Native Computing Foundation] via YouTube

Overview

Explore the advantages of using WebAssembly (Wasm) for AI inference in cloud-native ecosystems in this 35-minute conference talk. Learn how Wasm lets developers build an application once on their own machines and run it unchanged across hardware platforms (GPUs and CPUs), operating systems, and edge cloud environments. Discover how Wasm runtimes integrate into cloud-native frameworks, simplifying the deployment and scaling of AI applications. Gain insights into how Wasm offers a flexible and efficient fit for diverse cloud-native architectures, including Kubernetes, allowing developers to fully harness large language models, particularly open-source ones. Understand how Wasm's cross-platform portability keeps AI inference consistent, cost-effective, and efficient across different computing environments.
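To make the portability claim concrete, here is a minimal, illustrative Rust sketch (not taken from the talk) of LLM inference through the WASI-NN interface. It assumes the wasmedge-wasi-nn crate and a WasmEdge runtime with its GGML plugin, with a GGUF model preloaded under the hypothetical name "llama"; the calls follow the pattern in WasmEdge's published WASI-NN examples, and details may vary by crate and runtime version.

```rust
// Illustrative sketch: portable LLM inference via WASI-NN.
// Assumes the `wasmedge_wasi_nn` crate and a runtime (e.g. WasmEdge with its
// GGML plugin) where a model was preloaded under the name "llama", e.g.:
//   wasmedge --nn-preload llama:GGML:AUTO:model.gguf app.wasm
use wasmedge_wasi_nn::{ExecutionTarget, GraphBuilder, GraphEncoding, TensorType};

fn main() {
    let prompt = "What is WebAssembly?";

    // Load the preloaded model. ExecutionTarget::AUTO lets the runtime pick
    // GPU or CPU, which is where the hardware portability comes from.
    let graph = GraphBuilder::new(GraphEncoding::Ggml, ExecutionTarget::AUTO)
        .build_from_cache("llama")
        .expect("failed to load model");
    let mut ctx = graph
        .init_execution_context()
        .expect("failed to create execution context");

    // Feed the prompt as a UTF-8 byte tensor and run inference.
    ctx.set_input(0, TensorType::U8, &[1], prompt.as_bytes())
        .expect("failed to set input");
    ctx.compute().expect("inference failed");

    // Read back the generated text.
    let mut out = vec![0u8; 4096];
    let n = ctx.get_output(0, &mut out).expect("failed to read output");
    println!("{}", String::from_utf8_lossy(&out[..n]));
}
```

Compiled once to the wasm32-wasi target, the same .wasm artifact can run on a developer laptop, a GPU-backed cloud node, or a Kubernetes cluster with a Wasm runtime class, which is the deployment story the talk centers on.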

Syllabus

Leveraging Wasm for Portable AI Inference Across GPUs, CPUs, OS & Cloud-Native Environments - Miley Fu & Hung-Ying Tai

Taught by

CNCF [Cloud Native Computing Foundation]
