Overview
Explore the integration of Apache Toree, YuniKorn, Spark, and Airflow for building efficient, scalable data pipelines in this 24-minute conference talk. Discover how Apache Toree provides an interactive Spark analysis environment through Jupyter Notebook, and learn how Apache YuniKorn manages and schedules computational resources for improved system efficiency. Delve into the role of Apache Spark in large-scale data processing, focusing on its integration with Toree and YuniKorn. Examine how Apache Airflow orchestrates complex workflows, manages dependencies, and provides end-to-end processing solutions. Gain insights from AI Platform Architect Luciano Resende and Software Engineer Hongyue Zhang on leveraging these Apache projects for optimized data processing, drawing on their extensive experience with open source, enterprise-grade AI platforms, and data science technologies.
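To illustrate how these pieces can fit together, the sketch below shows one way Airflow might orchestrate a Spark job whose pods are scheduled by YuniKorn on Kubernetes. This example is not taken from the talk: the DAG id, application path, connection id, and queue name are hypothetical placeholders, and it assumes Spark on Kubernetes (3.3+), the Airflow Spark provider, and a YuniKorn deployment with a matching queue.

```python
# Minimal sketch (not from the talk): an Airflow DAG that submits a Spark job
# to Kubernetes, with driver/executor pods scheduled by YuniKorn.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="scalable_data_pipeline",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Submit a Spark batch job; the conf entries point the driver and executor
    # pods at the YuniKorn scheduler and tag them with an assumed queue.
    run_spark_job = SparkSubmitOperator(
        task_id="run_spark_job",
        application="/opt/jobs/pipeline_job.py",  # hypothetical application path
        conn_id="spark_k8s",                      # assumed Spark-on-K8s connection
        conf={
            "spark.kubernetes.scheduler.name": "yunikorn",        # use YuniKorn instead of the default scheduler
            "spark.kubernetes.driver.label.queue": "root.analytics",    # assumed YuniKorn queue
            "spark.kubernetes.executor.label.queue": "root.analytics",
        },
    )
```

In a setup like this, YuniKorn enforces queue quotas and gang-style resource decisions for the Spark pods, while Airflow handles scheduling, retries, and dependencies between pipeline steps; interactive exploration of the same data can happen separately through Toree notebooks.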
Syllabus
Orchestrating Scalable Data Pipelines with Apache Toree, YuniKorn, Spark, and Airflow
Taught by
The ASF