Deploy a Cluster with the Hazelcast Platform Operator for Kubernetes

In this tutorial, you’ll deploy a Hazelcast cluster and an instance of Management Center, using Hazelcast Platform Operator for Kubernetes.

Before you Begin

You need a Kubernetes or Openshift cluster, and the kubectl or oc command-line tool must be configured to communicate with your cluster.

Step 1. Deploy Hazelcast Platform Operator

From release 5.6.0 onwards, you can use a Helm chart to install the Hazelcast Platform Operator.

  1. Add the Hazelcast Helm Charts repository to your Helm repository list by running the following command:

    helm repo add hazelcast https://hazelcast-charts.s3.amazonaws.com/
    helm repo update
  2. You can deploy the Hazelcast Platform Operator either together with the CRDs or separately.

    Since CRDs are cluster-scoped resources, they may need to be installed by an administrator.
    • Cluster Wide Installation

    • Restricted Installation

    Run the following command to deploy the Operator and the CRDs together. By default, the Hazelcast Platform Operator watches all namespaces. Use the watchedNamespaces value to change this.

    helm install operator hazelcast/hazelcast-platform-operator --version=5.7.0 \
        --set=installCRDs=true

    Run the following commands to deploy the Operator and the CRDs separately. An administrator may need to do this.

    helm install operator-crds hazelcast/hazelcast-platform-operator-crds --version=5.7.0

    After installing the CRDs, install the Operator by running the following command. This operation requires only namespace-scoped permissions in the hz-system, ns-1, and ns-2 namespaces.

    helm install operator hazelcast/hazelcast-platform-operator --version=5.7.0 -n hz-system \
        --set=createClusterScopedResources=false \
        --set=webhook.enabled=false \
        --set=enableHazelcastNodeDiscovery=false \
        --set=installCRDs=false \
        --set=watchedNamespaces="{ns-1,ns-2}"
    You can view all configuration options by running the following command:

    helm show values hazelcast/hazelcast-platform-operator
  3. Verify that the Operator is up and running by checking its logs.

    For Kubernetes
    kubectl logs deployment.apps/operator-hazelcast-platform-operator
    For Openshift
    oc logs deployment.apps/operator-hazelcast-platform-operator

Step 2. Start the Hazelcast Cluster

After the Hazelcast Platform Operator is installed and running, you can create a Hazelcast cluster.

  • Open Source

  • Enterprise

  1. Create the Hazelcast custom resource file and name it hazelcast.yaml.

    apiVersion: hazelcast.com/v1alpha1
    kind: Hazelcast
    metadata:
      name: hazelcast-sample
    spec:
      clusterSize: 3
      repository: 'docker.io/hazelcast/hazelcast'
      version: '5.2.3-slim'
  2. Apply the custom resource to start the Hazelcast cluster.

    For Kubernetes
    kubectl apply -f hazelcast.yaml
    For Openshift
    oc apply -f hazelcast.yaml
  3. Verify that the cluster is up and running by checking the Hazelcast member logs.

    For Kubernetes
    kubectl logs pod/hazelcast-sample-0
    For Openshift
    oc logs pod/hazelcast-sample-0

You should see the following:

Members {size:3, ver:3} [
        Member [10.36.8.3]:5701 - ccf31703-de3b-4094-9faf-7b5d0dc145b2 this
        Member [10.36.7.2]:5701 - e75bd6e2-de4b-4360-8113-040773d858b7
        Member [10.36.6.2]:5701 - c3d105d2-0bca-4a66-8519-1cacffc05c98
]
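As a quick sanity check, you can count the members reported in the log. This is only a sketch: the log excerpt above is embedded as a string so the snippet runs without a cluster; in practice, pipe the output of kubectl logs pod/hazelcast-sample-0 into the same grep.

```shell
# Count cluster members in the log excerpt shown above.
# The text is hard-coded here for illustration; with a live cluster,
# replace the variable with the output of `kubectl logs pod/hazelcast-sample-0`.
log='Members {size:3, ver:3} [
        Member [10.36.8.3]:5701 - ccf31703-de3b-4094-9faf-7b5d0dc145b2 this
        Member [10.36.7.2]:5701 - e75bd6e2-de4b-4360-8113-040773d858b7
        Member [10.36.6.2]:5701 - c3d105d2-0bca-4a66-8519-1cacffc05c98
]'
# Each member line contains the literal "Member [" prefix; count them.
count=$(printf '%s\n' "$log" | grep -c 'Member \[')
echo "members: $count"   # members: 3
```

The count should match both the clusterSize in the custom resource and the size field in the Members header line.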

Hazelcast Enterprise requires a license key. If you don’t have a license key, you can request one from the Hazelcast website.

  1. Create a Kubernetes secret to hold your license key.

    For Kubernetes
    kubectl create secret generic hazelcast-license-key --from-literal=license-key=<YOUR LICENSE KEY>
    For Openshift
    oc create secret generic hazelcast-license-key --from-literal=license-key=<YOUR LICENSE KEY>
  2. Create the Hazelcast custom resource file and name it hazelcast-enterprise.yaml.

    apiVersion: hazelcast.com/v1alpha1
    kind: Hazelcast
    metadata:
      name: hazelcast-sample
    spec:
      clusterSize: 3
      repository: 'docker.io/hazelcast/hazelcast-enterprise'
      version: '5.2.3-slim'
      licenseKeySecret: hazelcast-license-key
  3. Apply the custom resource to start the Hazelcast cluster.

    For Kubernetes
    kubectl apply -f hazelcast-enterprise.yaml
    For Openshift
    oc apply -f hazelcast-enterprise.yaml
  4. Verify that the Hazelcast cluster is up and running by checking the Hazelcast member logs.

    For Kubernetes
    kubectl logs pod/hazelcast-sample-0
    For Openshift
    oc logs pod/hazelcast-sample-0

    You should see the following:

Members {size:3, ver:3} [
        Member [10.36.8.3]:5701 - ccf31703-de3b-4094-9faf-7b5d0dc145b2 this
        Member [10.36.7.2]:5701 - e75bd6e2-de4b-4360-8113-040773d858b7
        Member [10.36.6.2]:5701 - c3d105d2-0bca-4a66-8519-1cacffc05c98
]

Step 3. Check that the Hazelcast Cluster is Running

To check whether a cluster is running, see the status field of the Hazelcast resource.

You can display the status with the get hazelcast command.

For Kubernetes
kubectl get hazelcast
For Openshift
oc get hazelcast
NAME               STATUS    MEMBERS   EXTERNAL-ADDRESSES
hazelcast-sample   Running   3/3
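In scripts, you may want to read the STATUS and MEMBERS columns rather than eyeball them. A minimal sketch, using a hard-coded sample row in place of live output (with a real cluster, kubectl get hazelcast hazelcast-sample --no-headers produces a row of the same shape):

```shell
# Extract the STATUS and MEMBERS columns from the tabular output above.
# The row is hard-coded for illustration; with a live cluster, use
# `kubectl get hazelcast hazelcast-sample --no-headers` instead.
output='hazelcast-sample   Running   3/3'
status=$(printf '%s\n' "$output" | awk '{print $2}')
members=$(printf '%s\n' "$output" | awk '{print $3}')
echo "$status $members"   # Running 3/3
```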

Use the following command to view the status in long format.

For Kubernetes
kubectl get hazelcast hazelcast-sample -o=yaml
For Openshift
oc get hazelcast hazelcast-sample -o=yaml
status:
  hazelcastClusterStatus:
    readyMembers: 3/3
  phase: Running

The phase field represents the current status of the cluster, and can contain any of the following values:

  • Running: The cluster is up and running.

  • Pending: The cluster is in the process of starting.

  • Failed: An error occurred while starting the cluster.

Any additional information such as validation errors will be provided in the message field.

The readyMembers field represents the number of Hazelcast members that are connected to the cluster.

Use the readyMembers field only for informational purposes. This field is not always accurate. Some members may have joined or left the cluster since this field was last updated.

Step 4. Start Management Center

You can monitor the Hazelcast cluster by starting Management Center.

  • Open Source

  • Enterprise

  1. Create the ManagementCenter custom resource file and name it management-center.yaml.

    apiVersion: hazelcast.com/v1alpha1
    kind: ManagementCenter
    metadata:
      name: managementcenter-sample
    spec:
      repository: 'hazelcast/management-center'
      version: '5.2.1'
      externalConnectivity:
        type: LoadBalancer
      hazelcastClusters:
        - address: hazelcast-sample
          name: dev
      persistence:
        enabled: true
        size: 10Gi
StatefulSet does not support updates to the volumeClaimTemplates field, so set the persistence field only when you create the custom resource. Later updates to the persistence field have no effect on Management Center.
By default, Management Center data is persisted to the /data directory. If you want to use an existing PersistentVolumeClaim, set its name in the `.spec.persistence.existingVolumeClaimName` field in the ManagementCenter custom resource.
The hazelcastClusters field does not support deleting clusters from the custom resource. If you want to remove a cluster from Management Center, do it from the Management Center UI.
  2. Apply the custom resource to start Management Center.

    For Kubernetes
    kubectl apply -f management-center.yaml
    For Openshift
    oc apply -f management-center.yaml
  3. After a moment, verify that Management Center is up and running by checking the Management Center logs.

    For Kubernetes
    kubectl logs pod/managementcenter-sample-0
    For Openshift
    oc logs pod/managementcenter-sample-0
2021-08-26 15:21:04,842 [ INFO] [MC-Client-dev.lifecycle-1] [c.h.w.s.MCClientManager]: MC Client connected to cluster dev.
2021-08-26 15:21:05,241 [ INFO] [MC-Client-dev.event-1] [c.h.w.s.MCClientManager]: Started communication with member: Member [10.36.8.3]:5701 - ccf31703-de3b-4094-9faf-7b5d0dc145b2
2021-08-26 15:21:05,245 [ INFO] [MC-Client-dev.event-1] [c.h.w.s.MCClientManager]: Started communication with member: Member [10.36.7.2]:5701 - e75bd6e2-de4b-4360-8113-040773d858b7
2021-08-26 15:21:05,251 [ INFO] [MC-Client-dev.event-1] [c.h.w.s.MCClientManager]: Started communication with member: Member [10.36.6.2]:5701 - c3d105d2-0bca-4a66-8519-1cacffc05c98
2021-08-26 15:21:07,234 [ INFO] [main] [c.h.w.Launcher]: Hazelcast Management Center successfully started at http://localhost:8080/
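If you want a script to wait for Management Center, you can grep the log for the startup message. A sketch with the final log line above embedded (in practice, pipe kubectl logs pod/managementcenter-sample-0 into the same check):

```shell
# Detect successful startup from the Management Center log line shown above.
# The line is hard-coded for illustration; with a live deployment, replace it
# with the output of `kubectl logs pod/managementcenter-sample-0`.
line='2021-08-26 15:21:07,234 [ INFO] [main] [c.h.w.Launcher]: Hazelcast Management Center successfully started at http://localhost:8080/'
ready=no
if printf '%s\n' "$line" | grep -q 'successfully started'; then
  ready=yes
fi
echo "mc-ready: $ready"   # mc-ready: yes
```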

Some Management Center features and dashboards require a license key. If you don’t have a license key, you can request one from the Hazelcast website.

  1. Create a Kubernetes secret to hold your license key.

    For Kubernetes
    kubectl create secret generic hazelcast-license-key --from-literal=license-key=<YOUR LICENSE KEY>
    For Openshift
    oc create secret generic hazelcast-license-key --from-literal=license-key=<YOUR LICENSE KEY>
  2. Create the ManagementCenter custom resource file and name it management-center.yaml.

    apiVersion: hazelcast.com/v1alpha1
    kind: ManagementCenter
    metadata:
      name: managementcenter-sample
    spec:
      repository: 'hazelcast/management-center'
      version: '5.2.1'
      licenseKeySecret: hazelcast-license-key
      externalConnectivity:
        type: LoadBalancer
      hazelcastClusters:
        - address: hazelcast-sample
          name: dev
      persistence:
        enabled: true
        size: 10Gi
StatefulSet does not support updates to the volumeClaimTemplates field, so set the persistence field only when you create the custom resource. Later updates to the persistence field have no effect on Management Center.
By default, Management Center data is persisted to the /data directory. If you want to use an existing PersistentVolumeClaim, set its name in the `.spec.persistence.existingVolumeClaimName` field in the ManagementCenter custom resource.
The hazelcastClusters field does not support deleting clusters from the custom resource. If you want to remove a cluster from Management Center, do it from the Management Center UI.
  3. Apply the custom resource to start Management Center.

    For Kubernetes
    kubectl apply -f management-center.yaml
    For Openshift
    oc apply -f management-center.yaml
  4. After a moment, verify that Management Center is up and running by checking the Management Center logs.

    For Kubernetes
    kubectl logs pod/managementcenter-sample-0
    For Openshift
    oc logs pod/managementcenter-sample-0
2021-08-26 15:21:04,842 [ INFO] [MC-Client-dev.lifecycle-1] [c.h.w.s.MCClientManager]: MC Client connected to cluster dev.
2021-08-26 15:21:05,241 [ INFO] [MC-Client-dev.event-1] [c.h.w.s.MCClientManager]: Started communication with member: Member [10.36.8.3]:5701 - ccf31703-de3b-4094-9faf-7b5d0dc145b2
2021-08-26 15:21:05,245 [ INFO] [MC-Client-dev.event-1] [c.h.w.s.MCClientManager]: Started communication with member: Member [10.36.7.2]:5701 - e75bd6e2-de4b-4360-8113-040773d858b7
2021-08-26 15:21:05,251 [ INFO] [MC-Client-dev.event-1] [c.h.w.s.MCClientManager]: Started communication with member: Member [10.36.6.2]:5701 - c3d105d2-0bca-4a66-8519-1cacffc05c98
2021-08-26 15:21:07,234 [ INFO] [main] [c.h.w.Launcher]: Hazelcast Management Center successfully started at http://localhost:8080/

To access the Management Center dashboard, open http://$MANCENTER_IP:8080 in your browser. First, set MANCENTER_IP to the external address of the managementcenter-sample service:

For Kubernetes
MANCENTER_IP=$(kubectl get service managementcenter-sample -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
For Openshift
MANCENTER_IP=$(oc get service managementcenter-sample -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

If the EXTERNAL-IP of the service is a hostname rather than an IP address, run the following command instead:

For Kubernetes
MANCENTER_IP=$(kubectl get service managementcenter-sample -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
For Openshift
MANCENTER_IP=$(oc get service managementcenter-sample -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
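Since a LoadBalancer service exposes either an ip or a hostname in its ingress status, a script can run both jsonpath queries and fall back from one to the other. In this sketch the two values are hard-coded placeholders (example.elb.amazonaws.com is a hypothetical hostname) standing in for the results of the queries above:

```shell
# Fall back to the hostname when the ip field of the ingress status is empty.
# Both values are placeholders here; in practice, populate them with the two
# jsonpath queries shown above.
ip=''                                  # result of the ...ingress[0].ip query (empty for hostname-based LBs)
hostname='example.elb.amazonaws.com'   # result of the ...ingress[0].hostname query (hypothetical value)
MANCENTER_IP=${ip:-$hostname}          # use $ip if non-empty, otherwise $hostname
echo "$MANCENTER_IP"   # example.elb.amazonaws.com
```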

Step 5. Clean up

You can run the commands below to remove the Hazelcast cluster and Management Center.

For Kubernetes
kubectl delete -f hazelcast.yaml
kubectl delete -f management-center.yaml
For Openshift
oc delete -f hazelcast.yaml
oc delete -f management-center.yaml

If you installed Hazelcast Enterprise, run the following commands to remove the Hazelcast Enterprise cluster and the license key Secret.

For Kubernetes
kubectl delete -f hazelcast-enterprise.yaml
kubectl delete secret hazelcast-license-key
For Openshift
oc delete -f hazelcast-enterprise.yaml
oc delete secret hazelcast-license-key

Finally, run the command below to delete the Hazelcast Platform Operator deployment.

helm uninstall operator

If you installed the CRDs separately from the operator, you need to remove them by running the following command:

helm uninstall operator-crds

Next Steps

Learn how to expose Hazelcast clusters outside Kubernetes so you can connect external clients to them.