Data is constantly flowing into organizations from many sources. To derive insights and value from this data, it needs to go through an orchestrated pipeline of ingestion, storage, processing, and serving stages. This course will teach you how to build scalable, secure, and cost-effective batch data pipelines on AWS.
You will learn best practices for ingesting batch data from sources such as databases and data lakes. The course explores services such as AWS Glue and Amazon EMR for processing and transforming raw data into analytics-ready datasets, and covers data cataloging with the AWS Glue Data Catalog. You will also learn how to serve processed data for analysis, machine learning, and reporting using services such as Amazon Athena and Amazon QuickSight.
Activities
This course includes interactive content, videos, knowledge checks, assessments, and hands-on labs.
Course objectives
In this course, you will learn to do the following:
- Describe the purpose, architecture, and processes of a batch data pipeline solution on AWS.
- Identify the appropriate AWS services and configurations for building a batch data pipeline solution.
- Explain the processes of data ingestion, processing, cataloging, and serving data for consumption in a batch data pipeline.
- Implement automation, orchestration, security, and governance options for a batch data pipeline solution.
- Monitor, optimize, and troubleshoot a batch data pipeline solution on AWS.
- Build and deploy a batch data pipeline solution using AWS services like Amazon EMR, AWS Glue, Amazon S3, and Amazon Athena. (Labs 1 and 2)
Intended audience
This course is intended for the following job roles:
- Data Engineers
- Data Scientists
- Data Analysts
- Business Intelligence Engineers
Prerequisites
We recommend that attendees of this course have the following:
- 2-3 years of experience in data engineering
- 1-2 years of hands-on experience with AWS services
- Completed AWS Cloud Practitioner Essentials
- Completed Fundamentals of Analytics on AWS - Parts 1 and 2
- Completed Data Engineering on AWS - Foundations
Course outline
Module 1 - Building a Batch Data Pipeline (35 min)
This section lays the foundation for building a batch data pipeline on AWS. It covers key design considerations and data ingestion methods, and provides an assessment to evaluate your understanding of constructing a robust batch data pipeline solution.
- Lesson 1: Course Navigation
- Lesson 2: Introduction
- Lesson 3: Designing a Batch Data Pipeline
- Lesson 4: Ingesting Data
- Lesson 5: Assessment
- Lesson 6: Conclusion
- Lesson 7: Contact Us
Module 2 - Implementing the Batch Data Pipeline (30 min)
After designing the batch pipeline, this section dives into the implementation details. You'll learn how to process and transform data, catalog it for governance, and serve it for consumption by analytics tools. An assessment reinforces the concepts.
- Lesson 1: Course Navigation
- Lesson 2: Introduction
- Lesson 3: Processing and Transforming Data
- Lesson 4: Cataloging Data
- Lesson 5: Serving Data for Consumption
- Lesson 6: Assessment
- Lesson 7: Conclusion
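To make the cataloging lesson concrete, the sketch below assembles the parameter structure that describes a Parquet table in the AWS Glue Data Catalog (the shape boto3's `glue.create_table` expects as `TableInput`). All names, columns, and the S3 path are hypothetical placeholders; in practice, a Glue crawler usually infers and writes this entry automatically.

```python
# Sketch: the request shape for registering a table in the Glue Data Catalog.
# A Glue crawler normally creates this entry automatically; building it by
# hand shows what a catalog entry contains. All names are placeholders.
table_input = {
    "Name": "sales_parquet",  # hypothetical table name
    "StorageDescriptor": {
        "Columns": [
            {"Name": "order_id", "Type": "string"},
            {"Name": "amount", "Type": "double"},
        ],
        "Location": "s3://my-bucket/processed/sales/",  # placeholder path
        "InputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat",
        "OutputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat",
        "SerdeInfo": {
            "SerializationLibrary": "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe"
        },
    },
    "TableType": "EXTERNAL_TABLE",
}

# This dict would be passed as:
#   glue.create_table(DatabaseName="analytics_db", TableInput=table_input)
# The call itself is omitted here because it requires AWS credentials.
print(table_input["Name"])
```

Once a table is cataloged this way, services such as Amazon Athena can query the underlying S3 data by name, which is what the serving lesson builds on.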
Module 3 - A Day in the Life of a Data Engineer (Lab) (60 min)
In this lab, you will use temperature and precipitation metrics to determine whether a company should stock summer or winter items for various cities. You'll create an AWS Glue crawler, review IAM policies, view the Data Catalog, run a Glue job to transform data, and query the processed data in Athena.
- Task 1: Create and run an AWS Glue crawler
- Task 2: Review the IAM policies
- Task 3: View the table in the Data Catalog
- Task 4: Run a job in AWS Glue Studio to transform the data
- Task 5: Query the Parquet table in Amazon Athena
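Task 5 can also be done programmatically. The sketch below builds an example SQL query over a Parquet-backed table and the matching parameters for boto3's `athena.start_query_execution` call. The database, table, and column names (`weather_db`, `weather_parquet`, `city`, `temperature`) and the results bucket are hypothetical placeholders, not names from the lab environment.

```python
# Sketch: building an Athena query over a Parquet-backed catalog table.
# All names below (database, table, columns, bucket) are placeholders --
# substitute the ones created by your Glue crawler.
DATABASE = "weather_db"          # Glue Data Catalog database (assumed name)
TABLE = "weather_parquet"        # table registered by the crawler (assumed name)
RESULTS_BUCKET = "s3://my-athena-results/"  # bucket Athena writes results to

# Average temperature per city: a simple analytics-ready aggregation.
query = (
    f"SELECT city, AVG(temperature) AS avg_temp "
    f"FROM {DATABASE}.{TABLE} "
    f"GROUP BY city ORDER BY avg_temp DESC"
)

# These parameters match boto3's athena.start_query_execution(**params);
# the call itself is omitted here because it requires AWS credentials.
params = {
    "QueryString": query,
    "QueryExecutionContext": {"Database": DATABASE},
    "ResultConfiguration": {"OutputLocation": RESULTS_BUCKET},
}

print(query)
```

In the lab itself you run the equivalent SQL interactively in the Athena query editor; the programmatic form matters later, when queries become steps in an automated pipeline.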
Module 4 - Optimizing, Orchestrating, and Securing Batch Data Pipelines (40 min)
This section covers advanced topics to optimize your batch pipeline for cost and performance, orchestrate workflows across multiple AWS services, and implement security best practices and data governance.
- Lesson 1: Course Navigation
- Lesson 2: Introduction
- Lesson 3: Optimizing the Batch Data Pipeline
- Lesson 4: Orchestrating the Batch Data Pipeline
- Lesson 5: Securing and Governing the Batch Data Pipeline
- Lesson 6: Assessment
- Lesson 7: Conclusion
Module 5 - Orchestrating Data Processing in Spark Using AWS Step Functions (Lab) (90 min)
Apply what you have learned about orchestration by using AWS Step Functions to orchestrate an Apache Spark stock analysis workflow on Amazon EMR.
- Task 1: Explore the lab environment
- Task 2: Run the Step Functions state machine task
- Task 3: Validate the Step Functions run
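A Step Functions workflow like the one in this lab is defined in Amazon States Language (ASL). The sketch below assembles a minimal, hypothetical state machine that submits a Spark step to an existing EMR cluster through the `elasticmapreduce:addStep.sync` service integration; the cluster ID, script path, and state names are illustrative placeholders, not values from the lab environment.

```python
import json

# Minimal Amazon States Language definition (a sketch, not the lab's actual
# state machine). The EMR cluster ID and S3 script path are placeholders.
state_machine = {
    "Comment": "Run a Spark stock-analysis step on EMR (illustrative)",
    "StartAt": "RunSparkStep",
    "States": {
        "RunSparkStep": {
            "Type": "Task",
            # The .sync suffix makes Step Functions wait until the EMR
            # step finishes before moving on, which is what lets Task 3
            # validate the run end to end.
            "Resource": "arn:aws:states:::elasticmapreduce:addStep.sync",
            "Parameters": {
                "ClusterId": "j-XXXXXXXXXXXXX",  # placeholder cluster ID
                "Step": {
                    "Name": "StockAnalysis",
                    "ActionOnFailure": "CONTINUE",
                    "HadoopJarStep": {
                        "Jar": "command-runner.jar",
                        "Args": [
                            "spark-submit",
                            "s3://my-bucket/scripts/stock_analysis.py",  # placeholder
                        ],
                    },
                },
            },
            "End": True,
        }
    },
}

definition = json.dumps(state_machine, indent=2)
print(definition)
```

In the lab, the state machine is pre-built and you run and validate it; this JSON shape is what you would inspect in the Step Functions console when exploring the environment in Task 1.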