YouTube

Distributed Multi-GPU Computing with Dask, CuPy and RAPIDS

EuroPython Conference via YouTube

Overview

Explore distributed multi-GPU computing using Dask, CuPy, and RAPIDS in this EuroPython 2019 conference talk. Discover how recent NumPy community standards and protocols, such as the NEP-18 __array_function__ dispatch mechanism and the Python CUDA array interface, have simplified the integration of distributed and GPU computing libraries. Learn about GPU-accelerated clustering, the RAPIDS ecosystem for end-to-end GPU-accelerated data science, and the benefits of the Apache Arrow format. Dive into practical examples of distributed computing with Dask, including an SVD example and benchmark and scaling up with RAPIDS. Gain insight into the challenges of communication in distributed systems and the roadmap towards version 1.0 of these libraries. Enhance your understanding of high-performance computing techniques for data science applications.
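The pattern the talk builds toward can be summarized in a short sketch. The following is illustrative only, not code from the talk; it assumes the dask, dask-cuda, and cupy packages are installed and at least one CUDA GPU is available, and it relies on NEP-18 dispatch so that Dask's array algorithms operate on CuPy-backed chunks:

```python
import cupy as cp
import dask.array as da
from dask.distributed import Client
from dask_cuda import LocalCUDACluster

# One Dask worker per visible GPU (provided by the dask-cuda package).
cluster = LocalCUDACluster()
client = Client(cluster)

# A Dask array whose chunks are CuPy arrays, i.e. the data lives on the GPUs.
rs = da.random.RandomState(RandomState=cp.random.RandomState)
x = rs.random_sample((100_000, 1_000), chunks=(10_000, 1_000))

# NEP-18 (__array_function__) lets Dask dispatch the per-chunk math to CuPy,
# so this approximate SVD runs across the GPU workers.
u, s, v = da.linalg.svd_compressed(x, k=10)
u, s, v = da.compute(u, s, v)
```

Swapping cp.random.RandomState for NumPy's RandomState gives the same code a CPU-only execution path, which is the interoperability point the talk emphasizes.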

Syllabus

Intro
GPU-Accelerated Clustering Code Example (see the sketch after this syllabus)
What is RAPIDS? New GPU Accelerated Data Science Pipeline
RAPIDS End-to-End GPU-Accelerated Data Science
Learning from Apache Arrow
Data Science Workflow with RAPIDS
Ecosystem Partners
ML Technology Stack
Distributing Dask
Dask SVD Example
NumPy Array Function (NEP-18)
Python CUDA Array Interface
Interoperability for the Win
Challenges: Communication
SVD Benchmark
Scale up with RAPIDS
Road to 1.0
Additional Reading Material
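
As a companion to the "GPU-Accelerated Clustering Code Example" item above, a minimal sketch of that kind of snippet with RAPIDS cuML might look like the following. This is an illustration rather than the talk's own code; it assumes cuml and cupy are installed and uses k-means, an algorithm choice the syllabus does not specify:

```python
import cupy as cp
from cuml.cluster import KMeans

# Sample data generated directly on the GPU as a CuPy array.
X = cp.random.random((100_000, 16), dtype=cp.float32)

# cuML mirrors the scikit-learn estimator API but trains on the GPU.
kmeans = KMeans(n_clusters=8)
kmeans.fit(X)

labels = kmeans.labels_              # per-row cluster assignments
centers = kmeans.cluster_centers_    # GPU-resident cluster centers
```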

Taught by

EuroPython Conference
