Connect Hazelcast to Kafka Clusters Secured with Kerberos

Learn how to connect Hazelcast Jet pipelines to Kafka clusters that are secured with Kerberos authentication.


When Kafka brokers are secured with Kerberos authentication, your Hazelcast cluster must acquire session keys from the Kerberos server before it can communicate with the Kafka brokers.
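The session keys are requested as part of the Kafka client's SASL/GSSAPI handshake, which is driven entirely by client properties. The following is a minimal sketch of how such properties might be assembled on the connector side; the broker address, keytab path, and principal are illustrative assumptions, not values taken from this project:

```java
import java.util.Properties;

public class KerberosKafkaProps {

    // Build Kafka client properties for SASL/GSSAPI (Kerberos) authentication.
    // The keytab path and principal below are placeholders.
    static Properties kerberosProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092");
        props.setProperty("security.protocol", "SASL_PLAINTEXT");
        props.setProperty("sasl.mechanism", "GSSAPI");
        props.setProperty("sasl.kerberos.service.name", "kafka");
        props.setProperty("sasl.jaas.config",
            "com.sun.security.auth.module.Krb5LoginModule required "
            + "useKeyTab=true storeKey=true "
            + "keyTab=\"/etc/security/keytabs/client.keytab\" "
            + "principal=\"jduke@KERBEROS.EXAMPLE\";");
        return props;
    }

    public static void main(String[] args) {
        Properties p = kerberosProps();
        System.out.println(p.getProperty("sasl.mechanism")); // prints "GSSAPI"
    }
}
```

With these properties in place, the Kafka client obtains a ticket from the Kerberos server transparently; no extra code is needed in the pipeline itself.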

In this example, you’ll learn how to configure the Hazelcast Kafka connector to connect to a Kafka broker that’s secured with Kerberos authentication.

Before you Begin

Before starting this tutorial, make sure that you have the following prerequisites:

  • Docker and Docker Compose, to run the Kafka broker, Kerberos server, Hazelcast Platform, and Management Center containers

  • Git, to clone the sample project

  • Maven, to package the Jet job

Step 1. Clone the Project

To set up the project, you need to download the code from GitHub.

Clone the GitHub repository.


git clone
cd kafka-kerberos

Step 2. Start the Docker Containers

In this step, you’ll use Docker Compose to start all the Docker containers, including a Kafka broker, Kerberos server, Hazelcast Platform, and Management Center.

docker compose up -d

The Docker containers run in detached mode. To verify that they are running, use the following command:

docker ps

To see the logs of your Hazelcast member, use the following command:

docker logs hazelcast

You should see that you have a single member running in the cluster.

Members {size:1, ver:1} [
	Member []:5701 - 15116025-342b-43c0-83f7-a2a90f0281ce this
]

Step 3. Create a Kafka Topic

To create a Kafka topic, you’ll use the kafka-console-producer script that’s built into the Kafka broker.

Create the orders topic and add some records to it.

docker exec -i broker kafka-console-producer --broker-list broker:9092 --topic orders < orders.jsonl --producer.config /etc/kafka/

This file contains the Kafka client configuration that allows the clients to authenticate with Kerberos. The Kafka clients assume the jduke@KERBEROS.EXAMPLE principal, which is registered on the Kerberos server.
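Although the exact file is not shown above, a Kerberos-enabled Kafka client configuration typically looks something like the following sketch; the keytab path and service name here are assumptions, not values from the project:

```properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true \
    storeKey=true \
    keyTab="/etc/security/keytabs/jduke.keytab" \
    principal="jduke@KERBEROS.EXAMPLE";
```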

Step 4. Connect Hazelcast to the Kafka Broker

Now that your Kafka topic has some records, you can configure the Kafka connector in Hazelcast to consume those records. In this step, you'll create a Jet job that reads from the Kafka topic and writes the data to a Hazelcast map called sink_orders. The Kafka connector is configured to assume the same Kerberos principal as the producer in the previous step.

  1. Change into the jet-pipeline directory.

  2. Package the Java file into a JAR.

    mvn package
  3. Submit the JAR to your Hazelcast member. Replace the $PATH_TO_PROJECT placeholder with the absolute path to the kafka-kerberos directory.

    docker run -it --network kafka-kerberos_default -v $PATH_TO_PROJECT/jet-pipeline/target:/jars --rm hazelcast/hazelcast:5.1.4  hz-cli -t hazelcast:5701 submit -c com.example.hazelcast.jet.kafka.KafkaSourceWithClientServerHazelcast /jars/jet-kafka-1.0.0.jar

You should see the following in the console:

Submitting JAR '/jars/jet-kafka-1.0.0.jar' with arguments []
Orders added to map

Your Hazelcast cluster connects to the Kerberos server, acquires a session key, and reads records from the orders topic. Your Hazelcast cluster now contains a map called sink_orders that holds the orders.
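The Jet job you submitted can be pictured as a short pipeline. The sketch below is a minimal illustration, not the repository's exact source; it assumes that `kafkaProps` already holds the SASL/GSSAPI client properties accepted by the Kerberos-secured broker:

```java
import com.hazelcast.jet.kafka.KafkaSources;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import java.util.Properties;

public class OrdersPipeline {

    // Reads records from the "orders" topic and writes each key/value
    // entry into the "sink_orders" IMap. Authentication happens inside
    // the Kafka client, driven by the Kerberos settings in kafkaProps.
    static Pipeline build(Properties kafkaProps) {
        Pipeline p = Pipeline.create();
        p.readFrom(KafkaSources.<String, String>kafka(kafkaProps, "orders"))
         .withoutTimestamps()
         .writeTo(Sinks.map("sink_orders"));
        return p;
    }
}
```

Because the Kafka source emits map entries, `Sinks.map("sink_orders")` can consume them directly without an intermediate transform.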

Step 5. Verify that the Connection Succeeded

In this step, you’ll verify that the Kerberos server authenticated Hazelcast and the Kafka broker allowed the Hazelcast Kafka connector to read from the orders topic.

  1. Go to localhost:8080 and enable dev mode in Management Center.

  2. Open the SQL Browser.

  3. In the select a map dropdown, select sink_orders (No SQL Mapping) to auto-generate the CREATE MAPPING command.

  4. Click Execute Query.

  5. Delete the previous command from the SQL editor and enter the following:

    SELECT * FROM sink_orders;
  6. Click Execute Query.

The sink_orders map contains all the records in the orders topic.
