Leveraging Wasm for Portable AI Inference Across GPUs, CPUs, OS and Cloud-Native Environments

CNCF [Cloud Native Computing Foundation] via YouTube

Overview

Explore the advantages of using WebAssembly (Wasm) for AI inference in cloud-native ecosystems in this 25-minute conference talk. Discover how Wasm lets developers build AI applications on their personal machines and run them uniformly across GPUs, CPUs, operating systems, and edge cloud environments. Learn how Wasm integrates with cloud-native frameworks such as Kubernetes, simplifying the deployment and scaling of AI applications, and how it offers a flexible, efficient foundation for running large language models (LLMs), particularly open-source ones. Tailored for cloud-native practitioners and AI developers, the talk shows how Wasm's cross-platform capabilities deliver consistency, cost-effectiveness, and efficiency in AI inference across diverse computing environments.
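
To make the portability argument concrete, below is a minimal sketch of what such an application can look like, using the WASI-NN interface that Wasm runtimes such as WasmEdge expose for inference. This is an illustrative assumption, not material from the talk: the model file name, input shape, and the wasi-nn Rust binding names (which vary between crate versions) are all hypothetical. The key idea is that the compiled .wasm module stays hardware-agnostic, while the host runtime maps the WASI-NN calls onto whatever CPU or GPU backend is available.

    // Hypothetical sketch: portable image-classification inference via WASI-NN.
    // Compile once (e.g. `cargo build --target wasm32-wasip1`); the same .wasm
    // binary then runs on any runtime (such as WasmEdge) whose WASI-NN backend
    // targets the local CPU or GPU. API names follow the `wasi-nn` Rust
    // bindings and may differ slightly between crate versions.
    use wasi_nn::{ExecutionTarget, GraphBuilder, GraphEncoding, TensorType};

    fn main() {
        // Load an ONNX model from a file the host runtime pre-opens for the
        // module. "model.onnx" and the 1x3x224x224 shape are assumptions.
        let graph = GraphBuilder::new(GraphEncoding::Onnx, ExecutionTarget::CPU)
            .build_from_files(["model.onnx"])
            .expect("failed to load model");
        let mut ctx = graph
            .init_execution_context()
            .expect("failed to create execution context");

        // Feed a dummy all-zero tensor; a real application would pass image data.
        let input = vec![0f32; 3 * 224 * 224];
        ctx.set_input(0, TensorType::F32, &[1, 3, 224, 224], &input)
            .expect("failed to set input");

        // The runtime dispatches this call to whatever accelerator backend it
        // was built with; the Wasm module itself never names the hardware.
        ctx.compute().expect("inference failed");

        // Read back the class scores and report the highest-scoring index.
        let mut output = vec![0f32; 1000];
        ctx.get_output(0, &mut output).expect("failed to read output");
        let best = output
            .iter()
            .enumerate()
            .max_by(|a, b| a.1.total_cmp(b.1))
            .map(|(i, _)| i);
        println!("top class index: {:?}", best);
    }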

Syllabus

Leveraging Wasm for Portable AI Inference Across GPUs, CPUs, OS & Cloud-Native Environments - Miley Fu & Lucas Lu

Taught by

CNCF [Cloud Native Computing Foundation]
