The Ultimate Hands-On Hadoop

Packt via Coursera

Overview

Immerse yourself in the comprehensive world of Hadoop with this expertly designed course. Starting with the basics, you'll learn to install the Hortonworks Data Platform Sandbox on your local machine, providing you with a powerful environment to explore Hadoop's core functionalities. The course meticulously guides you through essential concepts such as the Hadoop Distributed File System (HDFS) and MapReduce, offering practical exercises to solidify your understanding.

As you progress, you'll delve into advanced Hadoop programming with tools like Pig, Hive, and Spark. These modules are designed to give you hands-on experience with real-world datasets, allowing you to build complex queries, analyze large datasets, and even venture into machine learning with Spark's MLlib. The course also covers integrating relational and non-relational databases with Hadoop, ensuring you can handle a wide range of data scenarios in your career.

The final sections focus on managing and optimizing your Hadoop cluster, introducing you to tools like YARN, ZooKeeper, Oozie, and Kafka. You'll learn how to feed data into your cluster efficiently, manage resources, and analyze streaming data in real time. By the end of this course, you'll be well-equipped to design and implement Hadoop-based solutions in any data-driven environment.

This course is ideal for data engineers, software developers, and IT professionals who have a basic understanding of programming and data management. Familiarity with Java, SQL, and Linux command-line interfaces is recommended but not required.

Syllabus

  • Learning All the Buzzwords and Installing the Hortonworks Data Platform Sandbox
    • In this module, we will dive into the world of Hadoop, starting with its installation and setup using the Hortonworks Data Platform Sandbox. You'll explore the key buzzwords and technologies that make up the Hadoop ecosystem, learn about the historical context and impact of the Hortonworks and Cloudera merger, and begin working with real data to get a feel for Hadoop's capabilities.
  • Using Hadoop's Core: The Hadoop Distributed File System (HDFS) and MapReduce
    • In this module, we will explore the core components of Hadoop: the Hadoop Distributed File System (HDFS) and MapReduce. You'll learn how HDFS reliably stores massive data sets across a cluster and how MapReduce enables distributed data processing. Through hands-on activities, you'll import datasets, set up a MapReduce environment, and write scripts to analyze data, including breaking down movie ratings and ranking movies by popularity. (A minimal mrjob sketch of the ratings breakdown appears after the syllabus.)
  • Programming Hadoop with Pig
    • In this module, we will delve into Pig, a high-level scripting language that simplifies Hadoop programming. You'll start by exploring the Ambari web-based UI, which makes working with Pig more accessible. The module includes practical examples and activities, such as finding the oldest five-star movies and identifying the most-rated one-star movies using Pig scripts. You'll also learn about the capabilities of Pig Latin and test your skills through challenges and result comparisons. (A rough PySpark analogue of the oldest five-star movies task is sketched after the syllabus.)
  • Programming Hadoop with Spark
    • In this module, we will explore the power of Apache Spark, a key technology in the Hadoop ecosystem known for its speed and versatility. You’ll start by understanding why Spark is a game-changer in big data. The module will cover Resilient Distributed Datasets (RDDs) and Datasets, showing you how to use them to analyze movie ratings data. You'll also delve into Spark's machine learning library (MLlib) to create a movie recommendation system. Through hands-on activities, you'll practice writing Spark scripts and refining your data analysis skills. (An ALS recommender sketch follows the syllabus.)
  • Using Relational Datastores with Hadoop
    • In this module, we will explore the integration of relational datastores with Hadoop, focusing on Apache Hive and MySQL. You'll start by learning how Hive enables SQL queries on data within HDFS, followed by hands-on activities to find popular and highly rated movies using Hive. The module also covers the installation and integration of MySQL with Hadoop, using Sqoop to seamlessly transfer data between MySQL and Hadoop's HDFS/Hive. Through practical exercises, you'll gain proficiency in managing and querying relational data within the Hadoop ecosystem. (A Hive query sketch appears after the syllabus.)
  • Using Non-Relational Data Stores with Hadoop
    • In this module, we will explore the use of non-relational (NoSQL) data stores within the Hadoop ecosystem. You'll learn why NoSQL databases are crucial for scalability and efficiency, and dive into specific technologies like HBase, Cassandra, and MongoDB. Through a series of activities, you'll practice importing data into HBase, integrating it with Pig, and using Cassandra and MongoDB alongside Spark. The module concludes with exercises to help you choose the most suitable NoSQL database for different scenarios, empowering you to make informed decisions in big data management. (A minimal MongoDB query sketch follows the syllabus.)
  • Querying Data Interactively
    • In this module, we will focus on interactive querying tools that allow you to quickly access and analyze big data across multiple sources. You'll explore technologies like Drill, Phoenix, and Presto, learning how each one solves specific challenges in querying large datasets. The module includes hands-on activities where you'll set up these tools, execute queries that span databases such as MongoDB, Hive, HBase, and Cassandra, and integrate these tools with other Hadoop ecosystem components. By the end of this module, you'll be equipped to perform efficient, real-time data analysis across varied data stores. (A Presto client sketch appears after the syllabus.)
  • Managing Your Cluster
    • In this module, we will explore the critical components involved in managing a Hadoop cluster. You'll learn about YARN's resource management capabilities, how Tez optimizes task execution using Directed Acyclic Graphs, and the differences between Mesos and YARN. We'll dive into ZooKeeper for maintaining reliable operations and Oozie for orchestrating complex workflows. Hands-on activities will guide you through setting up and using Zeppelin for interactive data analysis and using Hue for a more user-friendly interface. The module also touches on other noteworthy technologies like Chukwa and Ganglia, providing a comprehensive understanding of cluster management in Hadoop. (A ZooKeeper client sketch follows the syllabus.)
  • Feeding Data to Your Cluster
    • In this module, we will explore the essential tools for feeding data into your Hadoop cluster, focusing on Kafka and Flume. You'll learn how Kafka supports scalable and reliable data collection across a cluster and how to set it up to publish and consume data. Additionally, you'll discover how Flume's architecture differs from Kafka and how to use it for real-time data ingestion. Through hands-on activities, you'll configure Kafka to monitor Apache logs and Flume to watch directories, publishing incoming data into HDFS. These skills will help you manage and process streaming data effectively in your Hadoop environment. (A Kafka producer and consumer sketch appears after the syllabus.)
  • Analyzing Streams of Data
    • In this module, we will focus on analyzing streams of data using real-time processing frameworks such as Spark Streaming, Apache Storm, and Flink. You’ll start by learning how Spark Streaming processes micro-batches of data in real time, then participate in activities that include analyzing web logs streamed by Flume. The module then introduces Apache Storm and Flink, providing hands-on exercises to implement word count applications with these tools. By the end of this module, you will be able to build continuous applications that efficiently process and analyze streaming data. (A streaming word count sketch follows the syllabus.)
  • Designing Real-World Systems
    • In this module, we will focus on designing and implementing real-world systems using a combination of Hadoop ecosystem tools. You'll start by exploring additional technologies like Impala, NiFi, and AWS Kinesis, learning how they fit into broader Hadoop-based solutions. The module then guides you through the process of understanding system requirements and designing applications that consume and analyze large-scale data, such as web server logs or movie recommendations. By the end of this module, you’ll be equipped to design and build complex, efficient, and scalable data systems tailored to specific business needs.
  • Learning More
    • In this final module, we will provide you with a selection of books, online resources, and tools recommended by the author to further your knowledge of Hadoop and related technologies. This module serves as a guide for continued learning, offering you the means to stay updated with the latest developments in the Hadoop ecosystem and expand your skills beyond this course.
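
Illustrative Sketches

The short sketches below illustrate, in hedged form, some of the hands-on activities named in the syllabus. They are minimal examples under stated assumptions, not the course's own solutions; file names, table names, hosts, and column layouts (mostly the MovieLens 100K format the movie-ratings activities suggest) are assumptions.

One common way to write the ratings-breakdown MapReduce job in Python is the mrjob library. This sketch assumes tab-separated u.data input (userID, movieID, rating, timestamp):

    from mrjob.job import MRJob

    class RatingsBreakdown(MRJob):
        """Count how many times each star rating was given."""

        def mapper(self, _, line):
            # Each input line: userID <tab> movieID <tab> rating <tab> timestamp
            user_id, movie_id, rating, timestamp = line.split('\t')
            yield rating, 1

        def reducer(self, rating, counts):
            yield rating, sum(counts)

    if __name__ == '__main__':
        RatingsBreakdown.run()

Run it locally with "python RatingsBreakdown.py u.data", or add "-r hadoop" to submit it to a cluster.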
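
The Pig module's scripts are written in Pig Latin, which these sketches do not reproduce; to keep a single example language, here is a rough PySpark equivalent of the "oldest five-star movies" idea, assuming MovieLens-style u.data and u.item files:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("OldestFiveStarMovies").getOrCreate()

    # Assumed layouts: u.data is tab-separated (userID, movieID, rating, timestamp);
    # u.item is pipe-delimited (movieID, title, releaseDate, ...).
    ratings = spark.read.csv("u.data", sep="\t").toDF("userID", "movieID", "rating", "ts")
    items = (spark.read.csv("u.item", sep="|")
             .select(F.col("_c0").alias("movieID"),
                     F.col("_c1").alias("title"),
                     F.col("_c2").alias("releaseDate")))

    # Average each movie's rating, keep the perfect 5.0s, and list them oldest first.
    avg_ratings = (ratings.groupBy("movieID")
                   .agg(F.avg(F.col("rating").cast("float")).alias("avgRating")))
    (avg_ratings.filter(F.col("avgRating") == 5.0)
     .join(items, "movieID")
     .orderBy(F.to_date("releaseDate", "dd-MMM-yyyy"))
     .show(10, truncate=False))

    spark.stop()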
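
For the Spark module's recommender activity, a hedged sketch using the DataFrame-based ALS estimator (the course's MLlib walkthrough may use a different API generation):

    from pyspark.sql import SparkSession
    from pyspark.ml.recommendation import ALS

    spark = SparkSession.builder.appName("MovieRecs").getOrCreate()

    # Assumed tab-separated u.data input, cast to the types ALS expects.
    ratings = (spark.read.csv("u.data", sep="\t")
               .toDF("userId", "movieId", "rating", "ts")
               .selectExpr("cast(userId as int) userId",
                           "cast(movieId as int) movieId",
                           "cast(rating as float) rating"))

    # Train a collaborative-filtering model; "drop" avoids NaN predictions
    # for users or movies unseen during training.
    als = ALS(userCol="userId", itemCol="movieId", ratingCol="rating",
              coldStartStrategy="drop")
    model = als.fit(ratings)

    # Top 10 movie recommendations for each user.
    model.recommendForAllUsers(10).show(5, truncate=False)

    spark.stop()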
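
For the Hive module, one way to issue a "most popular movies" style of query from Python is through Spark's Hive support; this assumes a ratings table is already registered in the Hive metastore. (Sqoop itself is driven from the shell; a typical invocation looks roughly like "sqoop import --connect jdbc:mysql://localhost/movielens --table movies -m 1 --hive-import".)

    from pyspark.sql import SparkSession

    # Minimal sketch, assuming a Hive table named ratings with a movieID column.
    spark = (SparkSession.builder
             .appName("MostPopularMovies")
             .enableHiveSupport()
             .getOrCreate())

    spark.sql("""
        SELECT movieID, COUNT(*) AS ratingCount
        FROM ratings
        GROUP BY movieID
        ORDER BY ratingCount DESC
        LIMIT 10
    """).show()

    spark.stop()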
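
For the NoSQL module, a minimal pymongo sketch; the database and collection names (movielens.users) and the query fields are assumptions:

    from pymongo import MongoClient

    # Assumes MongoDB is reachable on its default port and that user records
    # (with age and occupation fields) have been loaded into movielens.users.
    client = MongoClient("mongodb://localhost:27017/")
    users = client["movielens"]["users"]

    # Find a few young scientists, similar in spirit to the module's activity.
    for user in users.find({"age": {"$lt": 40}, "occupation": "scientist"}).limit(5):
        print(user)

    client.close()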
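
For the interactive-querying module, a hedged sketch using the presto-python-client package; the host, port, catalog, and table are assumptions (maria_dev is the Hortonworks sandbox's default login):

    import prestodb

    # Assumes Presto is running with its Hive connector configured.
    conn = prestodb.dbapi.connect(
        host="localhost", port=8080, user="maria_dev",
        catalog="hive", schema="default",
    )
    cur = conn.cursor()
    cur.execute("SELECT * FROM ratings LIMIT 10")
    for row in cur.fetchall():
        print(row)
    conn.close()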
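
For the cluster-management module, a small sketch with the kazoo client showing the ephemeral znodes that ZooKeeper-based systems use to detect failed nodes; the paths are invented for illustration:

    from kazoo.client import KazooClient

    # Assumes ZooKeeper is reachable on its default port.
    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()

    # An ephemeral znode vanishes when this session ends, which is how
    # other processes learn that the node holding it has died.
    zk.ensure_path("/demo")
    zk.create("/demo/master", b"node1", ephemeral=True)
    print(zk.get_children("/demo"))

    zk.stop()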
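
For the data-ingestion module, a minimal kafka-python sketch; the broker address and topic name are assumptions:

    from kafka import KafkaProducer, KafkaConsumer

    # Publish one fake Apache-log line to an assumed "access-logs" topic.
    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    producer.send("access-logs", b"127.0.0.1 - GET /index.html")
    producer.flush()

    # Read it back; consumer_timeout_ms ends the loop once the topic is idle.
    consumer = KafkaConsumer("access-logs",
                             bootstrap_servers="localhost:9092",
                             auto_offset_reset="earliest",
                             consumer_timeout_ms=5000)
    for message in consumer:
        print(message.value.decode())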
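
For the streaming module, the classic word count with Spark Streaming's DStream API; it listens on a local socket (for example, one fed by "nc -lk 9999") rather than Flume, purely for illustration:

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext

    # One-second micro-batches, as described in the module.
    sc = SparkContext(appName="StreamingWordCount")
    ssc = StreamingContext(sc, batchDuration=1)

    lines = ssc.socketTextStream("localhost", 9999)
    counts = (lines.flatMap(lambda line: line.split())
              .map(lambda word: (word, 1))
              .reduceByKey(lambda a, b: a + b))
    counts.pprint()

    ssc.start()
    ssc.awaitTermination()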

Taught by

Packt
