What you'll learn:
- Acquiring the necessary skills to qualify for an entry-level Data Engineering position
- Developing a practical comprehension of Data Lakehouse concepts through hands-on experience
- Learning to work with a Delta table: accessing its version history, recovering data, and using time travel (a short sketch follows this list)
- Optimizing a Delta table with techniques such as caching, partitioning, and Z-ordering for faster analytics
- Gaining hands-on experience building a data pipeline with Apache Spark on the Databricks platform
- Doing analytics within a Databricks AWS account
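To make the Delta material concrete, here is a minimal sketch of version history, time travel, and restore in PySpark. It assumes a Delta table named `events` already exists; the table name is hypothetical.

```python
from pyspark.sql import SparkSession

# On Databricks a SparkSession is already provided as `spark`;
# this line just makes the sketch self-contained elsewhere.
spark = SparkSession.builder.getOrCreate()

# Inspect the table's version history (operations, timestamps, users).
spark.sql("DESCRIBE HISTORY events").show(truncate=False)

# Time travel: query the table as it looked at an earlier version.
old_df = spark.sql("SELECT * FROM events VERSION AS OF 0")

# Recover data by restoring the table to that earlier version.
spark.sql("RESTORE TABLE events TO VERSION AS OF 0")
```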
Data Engineering is a vital component of modern data-driven businesses. The ability to process, manage, and analyze large-scale data sets is a core requirement for organizations that want to stay competitive. In this course, you will learn how to build a data pipeline using Apache Spark on Databricks' Lakehouse architecture. This will give you practical experience in working with Spark and Lakehouse concepts, as well as the skills needed to excel as a Data Engineer in a real-world environment.
Throughout the Course, You Will Learn:
- Conducting analytics using Python and Scala with Spark (a worked Python example follows this list).
- Applying Spark SQL and Databricks SQL for analytics.
- Developing a data pipeline with Apache Spark.
- Becoming proficient with Databricks Community Edition.
- Managing a Delta table: accessing version history, restoring data, and using time travel features.
- Optimizing query performance using Delta Cache.
- Working with Delta tables and the Databricks File System (DBFS).
- Gaining insights into real-world scenarios from experienced instructors.
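As a small taste of the analytics lectures, the sketch below joins, filters, and aggregates two hypothetical DataFrames (`orders` and `customers`) with the DataFrame API, then repeats the query in Spark SQL; `show()` is the action that triggers execution.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

orders = spark.createDataFrame(
    [(1, "c1", 120.0), (2, "c2", 75.5), (3, "c1", 30.0)],
    ["order_id", "customer_id", "amount"],
)
customers = spark.createDataFrame(
    [("c1", "Alice"), ("c2", "Bob")], ["customer_id", "name"]
)

# Transformations are lazy: join, filter, and aggregate only build a plan.
totals = (
    orders.join(customers, "customer_id")
    .filter(F.col("amount") > 50)
    .groupBy("name")
    .agg(F.sum("amount").alias("total"))
)

# An action triggers execution and displays the result.
totals.show()

# The same analysis expressed in Spark SQL.
orders.createOrReplaceTempView("orders")
customers.createOrReplaceTempView("customers")
spark.sql("""
    SELECT c.name, SUM(o.amount) AS total
    FROM orders o JOIN customers c USING (customer_id)
    WHERE o.amount > 50
    GROUP BY c.name
""").show()
```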
Course Structure:
- Beginning with Databricks Community Edition and building a basic pipeline with Spark.
- Progressing to more complex topics once you are comfortable with the platform.
- Learning analytics with Spark in Python and Scala, including transformations, actions, joins, Spark SQL, and the DataFrame API.
- Operating a Delta table: accessing its version history, restoring data, and using time travel with Spark and Databricks SQL.
- Using Delta Cache to optimize query performance (a short sketch follows this list).
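The optimization lectures center on commands like the following. This is a hedged sketch assuming a Databricks cluster and a hypothetical Delta table `events`; CACHE SELECT and OPTIMIZE ... ZORDER BY are Databricks/Delta SQL features and will not run on plain open-source Spark.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided as `spark` on Databricks

# Warm the Databricks disk (Delta) cache for frequently queried columns.
spark.sql("CACHE SELECT event_type, event_time FROM events")

# Compact small files and co-locate related rows with Z-ordering, so
# queries filtering on event_type can skip more files.
spark.sql("OPTIMIZE events ZORDER BY (event_type)")
```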
Optional Lectures on AWS Integration:
- 'Setting up Databricks Account on AWS' and 'Running Notebooks Within a Databricks AWS Account'
- Building an ETL pipeline with Delta Live Tables (a sketch follows this list)
- Providing additional opportunities to explore Databricks within the AWS ecosystem
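To give a flavor of the Delta Live Tables lecture, here is a minimal two-table pipeline sketch. It only runs inside a DLT pipeline on Databricks, where `spark` is provided by the runtime; the source path and table names are hypothetical.

```python
import dlt
from pyspark.sql import functions as F

# Bronze: ingest raw JSON events from cloud storage (hypothetical path).
@dlt.table(comment="Raw events ingested from cloud storage.")
def raw_events():
    return spark.read.format("json").load("/mnt/raw/events/")

# Silver: keep only rows with a valid timestamp and add a date column.
@dlt.table(comment="Cleaned events with valid timestamps.")
@dlt.expect_or_drop("valid_time", "event_time IS NOT NULL")
def clean_events():
    return dlt.read("raw_events").withColumn("event_date", F.to_date("event_time"))
```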
This course is designed for Data Engineering beginners; no prior knowledge of Python or Scala is required. However, some familiarity with databases and SQL is necessary to succeed. Upon completion, you will have the skills and knowledge required for a real-world Data Engineer role.
Throughout the course, you will work with hands-on examples and real-world scenarios to apply the concepts you learn. By the end of the course, you will have the practical experience and skills required to understand Spark and Lakehouse concepts, and to build a scalable and reliable data pipeline using Spark on Databricks' Lakehouse architecture.