High-Performance Data Engineering with Kafka and Spark - Processing 1.2 Billion Records Per Hour
CodeWithYu via YouTube
Overview
Learn to build and optimize a high-performance data streaming pipeline capable of processing 1.2 billion records per hour in this comprehensive video tutorial. Master the integration of Apache Kafka for real-time data streaming, Apache Spark for rapid data processing, and monitoring tools such as the ELK Stack, Grafana, and Prometheus. Explore the complete system architecture, from whiteboard design to practical implementation, including detailed data storage estimations and clean architecture principles. Compare Python and Java Kafka producers for performance optimization and see throughput reach 300,000 records per second. Dive into Apache Spark consumer implementation, job optimization techniques, and cluster health management. Through hands-on coding demonstrations and architectural discussions, gain practical insights into building, monitoring, and scaling ultra-high-performance streaming platforms using industry-standard tools and best practices.
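For a sense of scale, 1.2 billion records per hour is roughly 333,000 records per second (1,200,000,000 ÷ 3,600 ≈ 333,333), which puts the tutorial's 300,000 records-per-second producer benchmark in context. The sketch below is not the code from the video: it is a minimal Python Kafka producer built on the confluent-kafka client, with the broker address, the topic name "transactions", the payload shape, and the batching/compression settings chosen purely to illustrate the kind of throughput-oriented tuning the tutorial covers.

```python
# Minimal sketch of a throughput-tuned Kafka producer in Python using the
# confluent-kafka client. The broker address, topic name, payload shape, and
# tuning values are illustrative assumptions, not the tutorial's actual code.
import json
import time

from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",  # assumed local broker
    "linger.ms": 20,                        # wait briefly so batches fill up
    "batch.size": 512 * 1024,               # larger batches amortize request overhead
    "compression.type": "lz4",              # fewer bytes on the wire per batch
    "acks": "1",                            # leader-only acks trade durability for speed
})

def on_delivery(err, msg):
    # Invoked from poll()/flush(); only failures are reported here.
    if err is not None:
        print(f"Delivery failed: {err}")

num_records = 1_000_000  # small local benchmark, nowhere near the full 1.2B/hour load
start = time.time()
for i in range(num_records):
    payload = json.dumps({"id": i, "amount": i * 0.01}).encode("utf-8")
    try:
        producer.produce("transactions", value=payload, callback=on_delivery)
    except BufferError:
        producer.poll(0.5)  # local queue full: drain callbacks, then retry once
        producer.produce("transactions", value=payload, callback=on_delivery)
    producer.poll(0)        # serve delivery callbacks without blocking

producer.flush()
elapsed = time.time() - start
print(f"{num_records / elapsed:,.0f} records/second")
```

On the consuming side, the tutorial reads the stream with Apache Spark. A minimal PySpark Structured Streaming sketch that subscribes to the same assumed topic could look like the following; the schema, checkpoint path, and console sink are placeholders rather than the video's configuration, and the job needs the spark-sql-kafka connector package on its classpath.

```python
# Minimal sketch of an Apache Spark Structured Streaming consumer for the
# assumed "transactions" topic. Schema, paths, and options are illustrative.
# Requires the spark-sql-kafka connector (e.g. via spark-submit --packages).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, LongType, StructField, StructType

spark = (
    SparkSession.builder
    .appName("high-throughput-consumer")
    .getOrCreate()
)

# Matches the hypothetical two-field payload produced above.
schema = StructType([
    StructField("id", LongType()),
    StructField("amount", DoubleType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "transactions")
    .option("startingOffsets", "latest")
    .option("maxOffsetsPerTrigger", 1_000_000)  # cap records per micro-batch
    .load()
)

# Kafka delivers the value as bytes; decode it and parse the JSON payload.
parsed = (
    raw.select(from_json(col("value").cast("string"), schema).alias("record"))
    .select("record.*")
)

query = (
    parsed.writeStream
    .format("console")                           # stand-in sink for the sketch
    .outputMode("append")
    .option("checkpointLocation", "/tmp/consumer-checkpoint")  # placeholder path
    .start()
)
query.awaitTermination()
```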
Syllabus
Introduction
High Level Architecture Whiteboard
Data Storage Estimation with workings!
Clean Architecture
System Architecture
System Architecture Setup and Coding
Python Producer
Java Producer yay!
300,000 records per second!
Apache Spark Consumer
Spark Job Optimisation and Statistics
Cluster Health Issues
Part 1 Outro
Taught by
CodeWithYu