
Stanford Seminar - Dataflow for Convergence of AI and HPC - GroqChip

Stanford University via YouTube

Overview

Explore the convergence of AI and High-Performance Computing (HPC) through the lens of dataflow architecture in this Stanford seminar. Delve into the novel Groq architecture and tensor streaming processor (TSP), combining traditional dataflow elements with a powerful stream programming model. Discover how over 400,000 arithmetic units and a SIMD spatial microarchitecture efficiently exploit dataflow locality in deep learning models. Learn about the stream programming model, where on-chip functional units consume and produce tensor inputs, chaining outputs to minimize memory access. Understand how deterministic execution and dataflow locality simplify compiler abstraction, enabling efficient orchestration of data and instructions. Examine the extension of this model to distributed scale-out systems, creating a synchronous parallel computer illusion. Explore how the Groq parallelizing compiler leverages this programming model to auto-scale TSPs, facilitating robust numerical computations in production environments.
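To make the chaining idea concrete, here is a minimal conceptual sketch in Python of a stream-style programming model in which each functional unit consumes the tensor produced by the previous one, so intermediate results never round-trip through memory. The names (chain, matmul_unit, relu_unit) are illustrative assumptions for this sketch and are not the Groq API or compiler interface.

import numpy as np


def matmul_unit(x, w):
    # A functional unit that consumes a tensor stream and produces one.
    return x @ w


def relu_unit(x):
    # Elementwise activation unit.
    return np.maximum(x, 0.0)


def chain(x, *units):
    # Feed each unit's output directly into the next unit, mimicking
    # producer-consumer chaining that avoids spilling intermediate
    # tensors to off-chip memory.
    for unit in units:
        x = unit(x)
    return x


# Usage: a two-layer block expressed as a chained stream of functional units.
x = np.random.rand(8, 16).astype(np.float32)
w1 = np.random.rand(16, 32).astype(np.float32)
w2 = np.random.rand(32, 4).astype(np.float32)

out = chain(
    x,
    lambda t: matmul_unit(t, w1),
    relu_unit,
    lambda t: matmul_unit(t, w2),
)
print(out.shape)  # (8, 4)

In an actual dataflow machine the scheduling of these chained units is resolved by the compiler rather than at run time; the sketch only illustrates the producer-consumer structure described above.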

Syllabus

Introduction
Dennis Axe
Hardware Software Interface
Pipeline
Core Architecture
Superlane Architecture
Domain-Specific Architecture
Data Types
Communication and Computation
Energy Difference
Functional Control Units
Superlane
Vector Processor
Memory System
Switch Execution Module
System Architecture
Topology
Packaging
Network
Normal RDMA
Communication model
Synchronous communication

Taught by

Stanford Online

