A Chiplet-Based Generative Inference Architecture with Block Floating Point Datatypes

Scalable Parallel Computing Lab, SPCL @ ETH Zurich via YouTube

Overview

Explore a comprehensive conference talk on chiplet-based generative inference architecture and block floating point datatypes for AI acceleration. Delve into modular, spatial CGRA-like architectures optimized for generative inference, and learn about deep RL-based mappers in compilers for spatial and temporal architectures. Discover weight and activation quantization techniques in block floating point formats, building upon GPTQ and SmoothQuant, and their implementation in PyTorch. Examine an extension to EL-attention for reducing KV cache size and bandwidth. Gain insights from speaker Sudeep Bhoja in this SPCL_Bcast #38 recording from ETH Zurich's Scalable Parallel Computing Lab, featuring an in-depth presentation followed by announcements and a Q&A session.
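The talk covers weight and activation quantization in block floating point formats, where a group of values shares a single power-of-two exponent and each value keeps only a narrow mantissa. The minimal PyTorch sketch below is not taken from the talk; the function name, block size, and mantissa width are illustrative assumptions, and it only demonstrates the general idea of per-block shared exponents with fake quantization.

```python
import torch
import torch.nn.functional as F

def bfp_quantize(x: torch.Tensor, block_size: int = 16, mantissa_bits: int = 4) -> torch.Tensor:
    """Fake-quantize a tensor to a simple block floating point format and
    return the dequantized result. Each block of `block_size` consecutive
    values shares one power-of-two exponent; each value keeps a signed
    integer mantissa of `mantissa_bits` bits. Illustrative sketch only,
    not the format described in the talk."""
    shape = x.shape
    flat = x.flatten()
    pad = (-flat.numel()) % block_size
    flat = F.pad(flat, (0, pad))              # pad so the blocks divide evenly
    blocks = flat.view(-1, block_size)

    # Shared exponent per block, chosen so every value in the block fits
    # inside the signed mantissa range after scaling.
    max_abs = blocks.abs().amax(dim=1, keepdim=True).clamp_min(1e-30)
    shared_exp = torch.floor(torch.log2(max_abs)) + 1

    # Quantization step implied by the shared exponent and mantissa width.
    step = torch.exp2(shared_exp - (mantissa_bits - 1))
    qmax = 2 ** (mantissa_bits - 1) - 1
    mantissa = torch.clamp(torch.round(blocks / step), -qmax - 1, qmax)

    # Dequantize, drop the padding, and restore the original shape.
    return (mantissa * step).flatten()[: x.numel()].view(shape)


# Example: quantize a weight matrix and measure the error introduced.
w = torch.randn(256, 256)
w_q = bfp_quantize(w, block_size=16, mantissa_bits=4)
print("mean abs error:", (w - w_q).abs().mean().item())
```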

Syllabus

Introduction
Talk
Announcements
Q&A Session

Taught by

Scalable Parallel Computing Lab, SPCL @ ETH Zurich

