What you'll learn:
- Apache Spark Foundations and Spark Architecture
- Data Engineering and Data Processing in Spark
- Working with Data Sources and Sinks
- Working with DataFrames and Spark SQL (see the short sketch after this list)
- Using PyCharm IDE for Spark Development and Debugging
- Unit Testing, Managing Application Logs and Cluster Deployment
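To give you a feel for the style of code covered in the course, here is a minimal, illustrative PySpark sketch of the DataFrame and Spark SQL work listed above. It is not taken from the course material; the session settings, sample data, and names are made up for this example.

from pyspark.sql import SparkSession

# Start a local Spark session (illustrative settings, not the course's project setup)
spark = SparkSession.builder \
    .appName("HelloSpark") \
    .master("local[3]") \
    .getOrCreate()

# Build a small DataFrame from in-memory rows (hypothetical sample data)
survey_df = spark.createDataFrame(
    [("United States", 30), ("Canada", 41), ("United States", 25)],
    ["Country", "Age"],
)

# The same aggregation through the DataFrame API ...
survey_df.groupBy("Country").count().show()

# ... and through Spark SQL on a temporary view
survey_df.createOrReplaceTempView("survey_tbl")
spark.sql("SELECT Country, COUNT(1) AS cnt FROM survey_tbl GROUP BY Country").show()

spark.stop()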
This course does not require any prior knowledge of Apache Spark or Hadoop. We explain Spark architecture and the fundamental concepts carefully enough to bring you up to speed and help you grasp the content of this course.
About the Course
I created this course, PySpark - Apache Spark Programming in Python for Beginners, to help you understand Spark programming and apply that knowledge to build data engineering solutions. The course is example-driven and follows a working-session-like approach: we take a live-coding approach and explain all the needed concepts along the way.
Who should take this Course?
I designed this course for software engineers who want to develop data engineering pipelines and applications using Apache Spark. It is also intended for data architects and data engineers who are responsible for designing and building their organization's data-centric infrastructure. Finally, it is useful for managers and architects who do not work on Spark implementations directly but work with the people who implement Apache Spark at the ground level.
Spark Version used in the Course
This course uses Apache Spark 3.5. I have tested all the source code and examples used in this course on Apache Spark 3.5 in the Databricks environment.
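If you want to confirm the Spark version in your own environment (a local setup or a Databricks cluster), a quick check like the one below works; spark.version is a standard PySpark property, not anything specific to this course's setup.

from pyspark.sql import SparkSession

# Create (or reuse) a session and print the runtime's Spark version
spark = SparkSession.builder.appName("VersionCheck").getOrCreate()
print(spark.version)  # expect something like "3.5.x" to match the course examples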