How to start Apache Kafka

Apache Kafka is an open-source distributed streaming platform. We can use it as a messaging system, a storage system, or for stream processing. So in this tutorial, JavaSampleApproach will show you the first steps to get started with Apache Kafka.

Related Articles:
How to start Spring Kafka Application with Spring Boot
How to start Spring Apache Kafka Application with SpringBoot Auto-Configuration

1. Download Apache Kafka

Go to the download page and download the file Scala 2.12 – kafka_2.12-0.10.2.1.tgz (asc, md5).


We get a .tgz archive; after extracting it, we have the folder kafka_2.12-0.10.2.1.
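On a Unix-based system the archive can be extracted with tar, for example (on Windows, a tool such as 7-Zip does the same job):

tar -xzf kafka_2.12-0.10.2.1.tgz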


cd to .\kafka_2.12-0.10.2.1\bin:


All .sh files under the folder .\kafka_2.12-0.10.2.1\bin are used to run Apache Kafka in Unix-based environments.

cd to .\kafka_2.12-0.10.2.1\bin\windows:


All .bat files under the folder .\kafka_2.12-0.10.2.1\bin\windows are used to run Apache Kafka on Windows.

2. Start a Kafka server

Apache Kafka uses ZooKeeper as a centralized coordination service, so we need to start a ZooKeeper server first.

-> Open a cmd, cd to .\kafka_2.12-0.10.2.1:

– Unix-based

– Windows
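For example, assuming the default config/zookeeper.properties that ships with the distribution (in this and the following sketches, the .sh command is the Unix-based one and the .bat command is the Windows one):

bin/zookeeper-server-start.sh config/zookeeper.properties
bin\windows\zookeeper-server-start.bat config\zookeeper.properties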

Start the Apache Kafka server.

-> Open a new cmd, cd to .\kafka_2.12-0.10.2.1:

– Unix-based

– Windows
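For example, with the default broker configuration at config/server.properties:

bin/kafka-server-start.sh config/server.properties
bin\windows\kafka-server-start.bat config\server.properties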

3. Create a Kafka Topic

Set up a topic named jsa-test that has only one partition and one replica.

-> Open a new cmd and cd to .\kafka_2.12-0.10.2.1:

– Unix-based

– Windows
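A sketch of the create command, assuming ZooKeeper is listening on localhost:2181 (the default):

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic jsa-test
bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic jsa-test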

List out the topics:

– Unix-based

– Windows
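For example, again assuming ZooKeeper on localhost:2181:

bin/kafka-topics.sh --list --zookeeper localhost:2181
bin\windows\kafka-topics.bat --list --zookeeper localhost:2181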

4. Use Kafka Producer to send messages

Use the Kafka command-line client to send messages to the Kafka topic.

– Unix-based

– Windows
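A sketch, assuming the broker is listening on localhost:9092 (the default port); each line typed into the producer console becomes one message:

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic jsa-test
bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic jsa-test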

5. Start a Kafka consumer

Use the Kafka command-line consumer to print the messages to standard output.

– Unix-based

– Windows
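For example, again assuming the broker at localhost:9092; the --from-beginning flag replays the messages already stored in the topic:

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic jsa-test --from-beginning
bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic jsa-test --from-beginning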

-> Results: the consumer prints out the messages that were sent from the producer window.

6. Create a multi-broker cluster

Up to now, we have set up a Kafka cluster with a single node. Now we go to the next step: set up 2 new nodes for the Kafka cluster.

6.1 Create config files for the new brokers

– Unix-based

Edit the new files as below:

– Windows

Edit the new files {server-1.properties, server-2.properties} as below:
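A sketch of this step: copy the existing broker config twice (cp on Unix, copy on Windows) and give each copy its own broker id, listener port, and log directory so the three brokers do not clash. The ports and directories below follow the standard quickstart and are assumptions:

cp config/server.properties config/server-1.properties
cp config/server.properties config/server-2.properties
copy config\server.properties config\server-1.properties
copy config\server.properties config\server-2.properties

# config/server-1.properties
broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/tmp/kafka-logs-1

# config/server-2.properties
broker.id=2
listeners=PLAINTEXT://:9094
log.dirs=/tmp/kafka-logs-2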

6.2 Start new nodes

– Unix-based

– Windows
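For example, each new broker runs with its own properties file (on Unix the trailing & keeps it in the background; on Windows, start each one in its own cmd window):

bin/kafka-server-start.sh config/server-1.properties &
bin/kafka-server-start.sh config/server-2.properties &
bin\windows\kafka-server-start.bat config\server-1.properties
bin\windows\kafka-server-start.bat config\server-2.properties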

6.3 Create a new topic for replication

– Unix-based

– Windows
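For example, creating replicated-topic with a replication factor of 3 so that every broker holds a copy:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic replicated-topic
bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic replicated-topic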

Check the replicated-topic with the describe topics command:

– Unix-based

– Windows
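For example:

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic replicated-topic
bin\windows\kafka-topics.bat --describe --zookeeper localhost:2181 --topic replicated-topic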

-> Results:

The first line gives a summary of all the partitions. Each additional line gives information about one partition.

leader is the node responsible for all reads and writes for the given partition.
replicas is the list of nodes that replicate the log for this partition regardless of whether they are the leader or even if they are currently alive.
isr is the set of “in-sync” replicas. This is the subset of the replicas list that is currently alive and caught-up to the leader.
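To illustrate, with Broker 2 as the leader the describe output looks roughly like this (the broker ids and their ordering will differ from run to run):

Topic:replicated-topic  PartitionCount:1  ReplicationFactor:3  Configs:
    Topic: replicated-topic  Partition: 0  Leader: 2  Replicas: 2,0,1  Isr: 2,0,1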

6.4 Create a Producer and Consumer for replicated-topic

– Unix-based

– Windows
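For example, pointing the console producer and consumer at the new topic (localhost:9092 is one of the cluster's brokers; any broker address works):

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic replicated-topic
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic replicated-topic --from-beginning
bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic replicated-topic
bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic replicated-topic --from-beginning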

6.5 Fault-Tolerance

Look at the topic description again:

Broker 2 is the leader. -> We will kill it from the command line:

– Unix-based

– Windows
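A sketch of looking up and killing the Broker 2 process; the <pid> placeholder must be replaced with the process id printed by the lookup command:

ps aux | grep server-2.properties
kill -9 <pid>
wmic process where "caption = 'java.exe' and commandline like '%server-2.properties%'" get processid
taskkill /pid <pid> /f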

Or we can press Ctrl + C in the cmd window of Broker 2 to kill it.

See shutdown logs:

Check the description of replicated-topic again:
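For example, running the same describe command again:

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic replicated-topic
bin\windows\kafka-topics.bat --describe --zookeeper localhost:2181 --topic replicated-topic

The partition line now looks roughly like this (the exact ids depend on the election):

Topic: replicated-topic  Partition: 0  Leader: 0  Replicas: 2,0,1  Isr: 0,1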

-> Now the leader is Broker 0. Broker 2 has been shut down and dropped from the in-sync replica list, so we just have 2 brokers: {0, 1}.

Start a consumer to check that the messages are still available:
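For example, pointing the console consumer at one of the surviving brokers (localhost:9092, broker 0, is assumed here):

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic replicated-topic --from-beginning
bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic replicated-topic --from-beginning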

-> Kafka fault tolerance works as expected! Now you can start developing with Apache Kafka!

By JavaSampleApproach | June 6, 2017.

