In this course, you will learn about the Spark-based Azure Databricks platform. You will see how the Spark Structured Streaming processing model works, and then use it to build an end-to-end, production-ready streaming pipeline on the Azure Databricks platform.
Modern data pipelines often include streaming data that needs to be processed in real time. While Apache Spark is very popular for big data processing and can help us build reliable streaming pipelines, managing the Spark environment is no cakewalk. In this course, Conceptualizing the Processing Model for Azure Databricks Service, you will learn how to use Spark Structured Streaming on the Databricks platform running on Microsoft Azure, and leverage its features to build an end-to-end streaming pipeline quickly and reliably, taking advantage of the collaboration options and optimizations it brings, without worrying about infrastructure management.

First, you will learn about the processing model of Spark Structured Streaming, the Databricks platform and its features, and how it runs on Microsoft Azure. Next, you will see how to set up the environment, including the workspace, clusters, and security; configure streaming sources and sinks; and understand how Structured Streaming fault tolerance works. After that, you will learn how to build each phase of a streaming pipeline by extracting data from a source, transforming it, and loading it into a sink, then make it production ready and run it using Databricks jobs. You will also see how to customize the cluster using initialization scripts and Docker containers to suit your business requirements.

Finally, you will explore other aspects: the different workloads available and how pricing works; best practices in terms of development, performance, stability, and cost; and how Spark Structured Streaming on Azure Databricks compares to other managed services, such as Flink on AWS, Azure Stream Analytics, and Beam on Google Cloud.

By the end of this course, you will have the skills and knowledge of the Azure Databricks platform needed to build an end-to-end streaming pipeline using Spark Structured Streaming.
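To make the extract, transform, and load phases mentioned above concrete, here is a minimal sketch of a Structured Streaming pipeline in PySpark. It is not taken from the course material: the built-in rate source stands in for a real source such as Event Hubs or Kafka, the Delta sink and file paths are placeholders, and it assumes a SparkSession is available, as it is in a Databricks notebook.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Extract: read a streaming DataFrame from the built-in "rate" source,
# which emits (timestamp, value) rows and is handy for testing.
source_df = (
    spark.readStream
    .format("rate")
    .option("rowsPerSecond", 10)
    .load()
)

# Transform: a simple projection and filter, standing in for real business logic.
transformed_df = (
    source_df
    .withColumn("doubled", col("value") * 2)
    .filter(col("value") % 2 == 0)
)

# Load: write to a sink with a checkpoint location; the checkpoint is what
# underpins Structured Streaming's fault-tolerance guarantees.
query = (
    transformed_df.writeStream
    .format("delta")  # e.g. a Delta table on Databricks
    .outputMode("append")
    .option("checkpointLocation", "/tmp/checkpoints/streaming-sketch")  # placeholder path
    .start("/tmp/tables/streaming-sketch")  # placeholder path
)

# query.awaitTermination()  # block until the stream is stopped
```

In a production setting, this kind of script would typically be scheduled and monitored as a Databricks job rather than run interactively, which is the workflow the course walks through.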