Overview
Learn the basics of Apache Kafka and how to use this enterprise-scale data streaming technology to read and write data from multiple sources, without managing multiple integrations.
Syllabus
Introduction
- Kafka course introduction
- Apache Kafka in five minutes
- Course objectives
- Topics, partitions, and offsets
- Producers and message keys
- Consumers and deserialization
- Consumer groups and consumer offsets
- Brokers and topics
- Topic replication
- Producer acknowledgments and topic durability
- ZooKeeper
- Kafka KRaft: Removing ZooKeeper
- Theory roundup
- Important: Starting Kafka and lecture order
- Starting Kafka with Conduktor: Multi-platform
- macOS: Download and set up Kafka in PATH
- macOS: Start ZooKeeper and Kafka
- macOS: Using brew
- Linux: Download and set up Kafka in PATH
- Linux: Start ZooKeeper and Kafka
- Windows WSL2: Download and set up Kafka in PATH
- Windows WSL2: Start ZooKeeper and Kafka
- Windows WSL2: How to fix problems
- Windows non-WSL2: Start ZooKeeper and Kafka
- macOS: Start Kafka in KRaft mode
- Linux: Start Kafka in KRaft mode
- Windows WSL2: Start Kafka in KRaft mode
- CLI introduction
- Kafka topics CLI
- Kafka console producer CLI
- Kafka console consumer CLI
- Kafka consumers in groups
- Kafka consumer groups CLI
- Resetting offsets
- Conduktor: Demo
- Kafka SDK list
- Creating a Kafka project
- Java producer
- Java producer callbacks
- Java producer with keys
- Java consumer
- Java consumer: Graceful shutdown
- Java consumer inside consumer group
- Java consumer incremental cooperative rebalance and static group membership
- Java consumer incremental cooperative rebalance: Practice
- Java consumer auto offset commit behavior
- Programming: Advanced tutorials
- Real-world project overview
- Wikimedia producer project setup
- Wikimedia producer implementation
- Wikimedia producer run
- Wikimedia producer: Producer configuration intro
- Producer acknowledgments deep dive
- Producer retries
- Idempotent producer
- Safe Kafka producer settings
- Wikimedia producer safe producer implementation
- Kafka message compression
- linger.ms and batch.size producer settings
- Wikimedia producer high-throughput implementation
- Producer default partitioner and sticky partitioner
- [Advanced] max.block.ms and buffer.memory
- OpenSearch consumer: Project overview
- OpenSearch consumer: Project setup
- Setting up OpenSearch on Docker
- Setting up OpenSearch in the cloud
- OpenSearch 101
- OpenSearch consumer implementation: Part 1
- OpenSearch consumer implementation: Part 2
- Consumer delivery semantics
- OpenSearch consumer implementation: Part 3 (idempotence)
- Consumer offset commit strategies
- OpenSearch consumer implementation: Part 4 (delivery semantics)
- OpenSearch consumer implementation: Part 5 (batching data)
- Consumer offset reset behavior
- OpenSearch consumer implementation: Part 6 (replaying data)
- Consumer internal threads
- Consumer replica fetching: Rack awareness
- Kafka extended APIs: Overview
- Kafka Connect introduction
- Kafka Connect Wikimedia and Elasticsearch hands-on
- Kafka Streams introduction
- Kafka Streams hands-on
- Kafka Schema Registry introduction
- Kafka Schema Registry hands-on
- Which Kafka API should you use?
- Choosing partition count and replication factor
- Kafka topic naming conventions
- Case study: MovieFlix
- Case study: GetTaxi
- Case study: MySocialMedia
- Case study: MyBank
- Case study: Big data ingestion
- Case study: Logging and metrics aggregation
- Kafka cluster setup: High-level architecture overview
- Kafka monitoring and operations
- Kafka security
- Kafka multi-cluster and MirrorMaker
- Advertised listeners: Kafka client and server communication protocol
- Changing a topic configuration
- Segments and indexes
- Log cleanup policies
- Log cleanup: Delete policy
- Log compaction theory
- Log compaction practice
- Unclean leader election
- Large messages in Kafka
- What's next?
Taught by
Stephane Maarek