Accelerate AI Inference Workloads with Google Cloud TPUs and GPUs

Google Cloud Tech via YouTube

Overview

Explore key considerations for choosing Cloud Tensor Processing Units (TPUs) and NVIDIA-powered graphics processing unit (GPU) VMs for high-performance AI inference on Google Cloud. Learn the strengths of each accelerator for various workloads, including large language models and generative AI. Discover deployment and optimization techniques for inference pipelines on TPUs or GPUs, along with cost implications and strategies for cost optimization. This 37-minute conference talk from Google Cloud Next 2024 features insights from speakers Alexander Spiridonov, Omer Hasan, Uğur Arpaci, and Kirat Pandya, offering practical guidance for deploying AI models at scale with Google Cloud's range of accelerator options.

Syllabus

Accelerate AI inference workloads with Google Cloud TPUs and GPUs

Taught by

Google Cloud Tech
