What you'll learn:
- Learn Apache Beam, a portable programming model whose pipelines can be deployed on Apache Spark, Apache Flink, Google Cloud Dataflow, and other runners.
- Understand how every component of Apache Beam works, with HANDS-ON examples.
- Learn Apache Beam fundamentals, including its architecture, programming model, PCollections, and Pipelines (see the pipeline sketch after this list).
- Use multiple PTransforms to read, transform, and write the processed data.
- Explore advanced concepts such as Windowing, Triggers, Watermarks, Late elements, and Type Hints.
- Load data into Google BigQuery tables from an Apache Beam pipeline.
- Build real-time Big data processing pipelines for business use cases with Apache Beam.
- Datasets and Beam code used in the lectures are available in the resources tab.
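To make those bullet points concrete, here is a minimal sketch of a Beam pipeline in Python. The file names and the CSV parsing logic are illustrative placeholders, not material from the course: the point is simply that a read PTransform produces a PCollection, element-wise transforms reshape it, and a write PTransform emits the result.

```python
import apache_beam as beam

# Minimal Beam pipeline: read -> transform -> write.
# "input.txt" and "output" are placeholder paths for illustration.
with beam.Pipeline() as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("input.txt")          # source -> PCollection of lines
        | "Parse" >> beam.Map(lambda line: line.split(","))    # element-wise PTransform
        | "DropHeader" >> beam.Filter(lambda f: f[0] != "id")  # keep data rows only
        | "Format" >> beam.Map(lambda f: ",".join(f))
        | "Write" >> beam.io.WriteToText("output")             # sink
    )
```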
Apache Beam is a unified and portable programming model for both batch and streaming data use cases.
Previously, Spark, Flink, and Cloud Dataflow jobs could run only on their respective engines. Apache Beam changes that with a portable programming model: you build language-agnostic Big data pipelines once and run them on any supported engine, whether Apache Spark, Apache Flink, Google Cloud Dataflow on Google Cloud Platform, or one of many other Big data engines.
Apache Beam is shaping up as the future of building Big data processing pipelines, and its portability is driving adoption across the industry; many big companies have already started deploying Beam pipelines on their production servers.
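As a hedged illustration of that portability (the project, region, and bucket names below are placeholders, not from the course), the same pipeline code can be pointed at different engines just by changing the runner option:

```python
from apache_beam.options.pipeline_options import PipelineOptions

# The pipeline code itself does not change; only the runner option does.
local_opts = PipelineOptions(runner="DirectRunner")  # run locally
spark_opts = PipelineOptions(runner="SparkRunner")   # run on Apache Spark
flink_opts = PipelineOptions(runner="FlinkRunner")   # run on Apache Flink
dataflow_opts = PipelineOptions(
    runner="DataflowRunner",             # run on Google Cloud Dataflow
    project="my-gcp-project",            # placeholder GCP project id
    region="us-central1",
    temp_location="gs://my-bucket/tmp",  # placeholder staging bucket
)
```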
What's included in the course?
- Complete Apache Beam concepts explained, from scratch to real-time implementation.
- Every Apache Beam concept is explained with proper HANDS-ON examples.
- Covers even those concepts that are not clearly explained anywhere else online.
- Type Hints, Encoding & Decoding, Watermarks, Windows, Triggers, and many more (see the windowing sketch after this list).
- Build 2 real-time Big data case studies using the Apache Beam programming model.
- Load processed data into Google Cloud BigQuery tables from an Apache Beam pipeline via Dataflow (a BigQuery sketch also follows this list).
- Code and datasets used in the lectures are attached to the course for your convenience.
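For the streaming concepts listed above, here is a small sketch of event-time windowing with a trigger and allowed lateness. It assumes a toy bounded input with manually attached timestamps so the example is self-contained; all keys, values, and durations are placeholders, not course data.

```python
import apache_beam as beam
from apache_beam import window
from apache_beam.transforms.trigger import (
    AccumulationMode, AfterProcessingTime, AfterWatermark)

with beam.Pipeline() as p:
    (
        p
        | "Create" >> beam.Create([("user1", 1), ("user1", 2), ("user2", 5)])
        # Attach an event-time timestamp so windowing has something to use.
        | "Stamp" >> beam.Map(lambda kv: window.TimestampedValue(kv, 0))
        | "Window" >> beam.WindowInto(
            window.FixedWindows(60),  # 60-second event-time windows
            # Fire at the watermark, then re-fire for late elements.
            trigger=AfterWatermark(late=AfterProcessingTime(30)),
            allowed_lateness=300,     # accept late elements up to 5 minutes
            accumulation_mode=AccumulationMode.ACCUMULATING)
        | "SumPerKey" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```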
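And a sketch of loading processed data into BigQuery with beam.io.WriteToBigQuery. The project, dataset, table, and schema names are placeholders; actually running this requires GCP credentials and a runner with BigQuery access, such as Dataflow.

```python
import apache_beam as beam

# Placeholder rows and table spec for illustration only.
with beam.Pipeline() as p:
    (
        p
        | "MakeRows" >> beam.Create([{"name": "alice", "score": 10}])
        | "LoadToBQ" >> beam.io.WriteToBigQuery(
            table="my-project:my_dataset.my_table",      # placeholder table
            schema="name:STRING,score:INTEGER",          # placeholder schema
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
    )
```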