What you'll learn:
- Understand how to use the Databricks Lakehouse Platform and its tools
- Build ETL pipelines using Apache Spark SQL and Python
- Process data incrementally in batch and streaming mode
- Orchestrate production pipelines
- Understand and follow best security practices in Databricks
If you are interested in becoming a Databricks Certified Data Engineer Associate, you have come to the right place! This study guide will help you prepare for the certification exam.
By the end of this course, you should be able to:
Understand how to use the Databricks Lakehouse Platform and its tools, and the benefits of using them, including:
- Data Lakehouse (architecture, descriptions, benefits)
- Data Science and Engineering workspace (clusters, notebooks, data storage)
- Delta Lake (general concepts, table management and manipulation, optimizations)
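As a taste of the Delta Lake topics above, here is a minimal sketch of table management and optimization, assuming a Databricks notebook where a `spark` session is already available; the `demo.events` schema and table names are purely illustrative.

```python
# Minimal sketch: Delta Lake table management in a Databricks notebook.
# Assumes `spark` is already in scope; schema and table names are illustrative.

# Create a schema and a managed table (Delta is the default format on Databricks).
spark.sql("CREATE SCHEMA IF NOT EXISTS demo")
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.events (
        event_id   STRING,
        event_time TIMESTAMP,
        payload    STRING
    )
""")

# Write a row, then inspect the table's Delta transaction history.
spark.sql("INSERT INTO demo.events VALUES ('e1', current_timestamp(), 'hello')")
spark.sql("DESCRIBE HISTORY demo.events").show(truncate=False)

# Compact small files (one of the optimizations covered in this section).
spark.sql("OPTIMIZE demo.events")
```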
Build ETL pipelines using Apache Spark SQL and Python, including:
- Relational entities (databases, tables, views)
- ELT (creating tables, writing data to tables, cleaning data, combining and reshaping tables, SQL UDFs)
- Python (facilitating Spark SQL with string manipulation and control flow, passing data between PySpark and Spark SQL)
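Here is a small example of the Python and SQL interplay described above: it registers a DataFrame as a temporary view, defines a SQL UDF, and builds a query with a Python f-string. It assumes a Databricks notebook with `spark` in scope; the `raw_orders` view and `mask_email` function are made-up names for illustration.

```python
# Minimal sketch: mixing Python and Spark SQL in a Databricks notebook.
# Assumes `spark` is in scope; `raw_orders` and `mask_email` are illustrative names.

# Make a small DataFrame visible to SQL by registering a temporary view.
raw = spark.createDataFrame(
    [(1, "alice@example.com"), (2, "bob@example.com")],
    ["order_id", "email"],
)
raw.createOrReplaceTempView("raw_orders")

# Define a SQL UDF (one of the ELT topics above).
spark.sql("""
    CREATE OR REPLACE TEMPORARY FUNCTION mask_email(email STRING)
    RETURNS STRING
    RETURN concat(left(email, 1), '***', substring(email, instr(email, '@')))
""")

# Use Python string manipulation to build the SQL, then pass the result back to PySpark.
table_name = "raw_orders"
cleaned = spark.sql(f"SELECT order_id, mask_email(email) AS email FROM {table_name}")
cleaned.show(truncate=False)
```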
Incrementally process data, including:
- Structured Streaming (general concepts, triggers, watermarks)
- Auto Loader (streaming reads)
- Multi-hop Architecture (bronze-silver-gold, streaming applications)
- Delta Live Tables (benefits and features)
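To make the incremental-processing ideas concrete, here is a minimal Auto Loader sketch that ingests raw files into a bronze Delta table, assuming a Databricks notebook; the paths and table name are placeholders, not part of any course materials.

```python
# Minimal sketch: Auto Loader feeding the bronze layer of a multi-hop pipeline.
# Assumes a Databricks notebook; paths and table names are placeholders.

bronze_stream = (
    spark.readStream
        .format("cloudFiles")                          # Auto Loader source
        .option("cloudFiles.format", "json")           # format of the incoming files
        .option("cloudFiles.schemaLocation", "/tmp/schemas/orders")
        .load("/tmp/landing/orders")                   # directory where raw files arrive
)

# Write incrementally to a bronze Delta table. availableNow processes all
# pending files once and then stops, which is the incremental batch pattern.
(
    bronze_stream.writeStream
        .option("checkpointLocation", "/tmp/checkpoints/orders_bronze")
        .trigger(availableNow=True)
        .toTable("orders_bronze")
)
```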
Build production pipelines for data engineering applications and Databricks SQL queries and dashboards, including:
- Jobs (scheduling, task orchestration, UI)
- Dashboards (endpoints, scheduling, alerting, refreshing)
Understand and follow best security practices, including:
- Unity Catalog (benefits and features)
- Entity permissions (data object privileges)
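Finally, a short sketch of what entity permissions look like in SQL, assuming a Unity Catalog-enabled workspace; the `main.demo.events` table and the `data_engineers` group are illustrative.

```python
# Minimal sketch: granting privileges on Unity Catalog objects.
# Assumes a Unity Catalog-enabled workspace; object and group names are illustrative.

# A principal needs USE CATALOG and USE SCHEMA before table-level privileges apply.
spark.sql("GRANT USE CATALOG ON CATALOG main TO `data_engineers`")
spark.sql("GRANT USE SCHEMA ON SCHEMA main.demo TO `data_engineers`")
spark.sql("GRANT SELECT ON TABLE main.demo.events TO `data_engineers`")

# Review what has been granted on the table.
spark.sql("SHOW GRANTS ON TABLE main.demo.events").show(truncate=False)
```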
With the knowledge you gain during this course, you will be ready to take the certification exam.
I am looking forward to meeting you!