Deploy a Hazelcast Cluster in the Cloud using Terraform

What You’ll Learn

Deploy a Hazelcast cluster and Hazelcast Management Center on AWS, Azure, or GCP using Terraform.

The Terraform files already define the necessary resources; all you need to do is set your credentials so that Terraform has permission to create resources on your behalf. After you run Terraform and create a cluster in the cloud, you can monitor the cluster using Hazelcast Management Center. You can also modify the Terraform files to create new resources or destroy the whole cluster.

Before You Begin

  • Terraform v0.13+

  • Access to one of AWS, Azure or GCP. The account must have permissions to create resources.

Giving Access to Terraform

Cloud providers offer different ways of authenticating Terraform so that it can create resources. Some of them are shown below.

  • AWS

  • Azure

  • GCP

You can set the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY; Terraform will use them to create resources. Run the following commands.

$ export AWS_ACCESS_KEY_ID="XXXXXXXXXXXX"
$ export AWS_SECRET_ACCESS_KEY="XXXXXXXXXXXXXXXX"

You can find other ways of providing credentials at Terraform AWS authentication.
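
For example, if you keep credentials in a named profile in ~/.aws/credentials, one option is to point the provider block in aws/main.tf at that profile instead of using environment variables. This is only a minimal sketch, and the profile name is a placeholder:

provider "aws" {
  region  = var.region

  # Read credentials from a named profile in ~/.aws/credentials
  # instead of the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY variables.
  profile = "YOUR-PROFILE-NAME"
}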

The account you use in Terraform should have the following permissions assigned to it:

"iam:CreateRole",
"iam:DeleteRole",
"iam:CreateInstanceProfile",
"iam:DeleteInstanceProfile",
"iam:DeleteRolePolicy",
"iam:AddRoleToInstanceProfile",
"iam:RemoveRoleFromInstanceProfile",
"iam:PutRolePolicy",
"iam:PassRole",
"iam:GetRole",
"iam:GetInstanceProfile",
"iam:GetPolicy",
"iam:GetRolePolicy"
"ec2:*"

The iam permissions are necessary so that Terraform can create and delete the role defined in aws/main.tf. The created role, aws_iam_role.discovery_role, is used by the Hazelcast instances to discover each other.
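
If you manage IAM as code, the permission list above can also be expressed as a policy and attached to the IAM user that runs Terraform. The following is only a sketch, applied from an account that already has IAM administration rights; the resource names and the YOUR-IAM-USER placeholder are hypothetical, not part of the tutorial's Terraform files:

data "aws_iam_policy_document" "terraform_permissions" {
  statement {
    effect    = "Allow"
    resources = ["*"]
    actions = [
      "iam:CreateRole",
      "iam:DeleteRole",
      "iam:CreateInstanceProfile",
      "iam:DeleteInstanceProfile",
      "iam:DeleteRolePolicy",
      "iam:AddRoleToInstanceProfile",
      "iam:RemoveRoleFromInstanceProfile",
      "iam:PutRolePolicy",
      "iam:PassRole",
      "iam:GetRole",
      "iam:GetInstanceProfile",
      "iam:GetPolicy",
      "iam:GetRolePolicy",
      "ec2:*",
    ]
  }
}

# Attach the policy above as an inline policy of the user that runs Terraform.
resource "aws_iam_user_policy" "terraform_permissions" {
  name   = "hazelcast-terraform-permissions"
  user   = "YOUR-IAM-USER"
  policy = data.aws_iam_policy_document.terraform_permissions.json
}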

If you are using a user account, you can log in with the Azure CLI. Run the following command to authenticate; Terraform will then detect your account automatically.

$ az login

If you have multiple subscriptions or tenants, you can choose one by adding the following lines to azure/main.tf.

provider "azurerm" {
  version = "=2.23.0"
  features {}

  subscription_id = "00000000-0000-0000-0000-000000000000"
  tenant_id       = "11111111-1111-1111-1111-111111111111"

}

If you want to authenticate with a service principal, refer to Authenticating to Azure.
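
As a rough sketch, a service principal's credentials can also be supplied directly in the provider block of azure/main.tf; all of the values below are placeholders:

provider "azurerm" {
  version = "=2.23.0"
  features {}

  subscription_id = "00000000-0000-0000-0000-000000000000"
  tenant_id       = "11111111-1111-1111-1111-111111111111"

  # Service principal credentials; prefer the ARM_CLIENT_ID and
  # ARM_CLIENT_SECRET environment variables over hard-coding them here.
  client_id     = "22222222-2222-2222-2222-222222222222"
  client_secret = "YOUR-CLIENT-SECRET"
}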

The account or service principal you use should have the role Owner assigned to it.

You can use a service account to authenticate Terraform. Create a service account key file in the Google Cloud Console, then put its path in gcp/main.tf as follows.

provider "google" {
  version = "3.5.0"

  credentials = file("KEY-FILE-PATH/YOUR-KEY-FILE.json")
  batching {
    enable_batching = "false"
  }
  project = var.project_id
  region  = var.region
  zone    = var.zone
}

The service account you use should have the following roles assigned to it:

Compute Admin
Role Administrator
Security Admin
Service Account Admin
Service Account User
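
If you prefer to grant these roles as code rather than through the console, a sketch along the following lines could work. It is illustrative only and not part of the tutorial's Terraform files; the service account address is a placeholder, and applying it requires an identity that can already administer IAM in the project:

resource "google_project_iam_member" "terraform_sa_roles" {
  # Grant each required role to the service account used by Terraform.
  for_each = toset([
    "roles/compute.admin",           # Compute Admin
    "roles/iam.roleAdmin",           # Role Administrator
    "roles/iam.securityAdmin",       # Security Admin
    "roles/iam.serviceAccountAdmin", # Service Account Admin
    "roles/iam.serviceAccountUser",  # Service Account User
  ])

  project = var.project_id
  role    = each.value
  member  = "serviceAccount:terraform@YOUR-PROJECT-ID.iam.gserviceaccount.com"
}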

Configuring Terraform for Connection

Now that Terraform has access to your credentials, you need to supply a few variables so that it can create resources correctly.

Terraform will need a public-private key pair to be able to provision files and execute commands on the remote machines. For this purpose, you can use one of your existing key pairs or create a new one with the following command:

$ ssh-keygen -f ~/.ssh/YOUR-KEY-NAME -t rsa

This command creates two key files, YOUR-KEY-NAME.pub and YOUR-KEY-NAME, which are the public and private keys, respectively. Terraform will use them to access the VMs.

  • AWS

  • Azure

  • GCP

In aws/terraform.tfvars, you need to provide values for two variables.

  • aws_key_name: This is the name of the public-private key pair you created earlier.

  • local_key_path: This is the path where you created the key pair.
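
For example, assuming the key pair created earlier lives in ~/.ssh, aws/terraform.tfvars could look roughly like this (both values are placeholders):

aws_key_name   = "YOUR-KEY-NAME"
local_key_path = "~/.ssh"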

This tutorial uses an Ubuntu image, and AWS creates a user named ubuntu on it by default, so if you want to connect to your VMs via SSH, use the ubuntu user. If you want to use another Linux distribution, refer to AWS EC2 Managing users and change the variable aws_ssh_user accordingly.
The configuration in aws/main.tf assumes that you have a default VPC in the region defined in var.region. AWS creates a default VPC for each activated region, so if you haven't deleted it, you can skip this note. Otherwise, refer to AWS Creating Default VPC.

In azure/terraform.tfvars, you need to provide values for two variables.

  • azure_key_name: This is the name of the public-private key pair you created earlier.

  • local_key_path: This is the path where you created the key pair.
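
For example, azure/terraform.tfvars could look roughly like this (both values are placeholders):

azure_key_name = "YOUR-KEY-NAME"
local_key_path = "~/.ssh"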

In gcp/terraform.tfvars, you need to provide values for three variables.

  • gcp_key_name: This is the name of the public-private key pair you created earlier.

  • local_key_path: This is the path where you created the key pair.

  • project_id: This is the ID of the project you will use.
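
For example, gcp/terraform.tfvars could look roughly like this (all values are placeholders):

gcp_key_name   = "YOUR-KEY-NAME"
local_key_path = "~/.ssh"
project_id     = "YOUR-PROJECT-ID"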

Deploying the Cluster

After you have set up authentication for your preferred cloud provider and provided the necessary variables, cd into that provider's directory.

If you are using a paid subscription, you may be charged for the resources created in this tutorial. However, you can complete the tutorial using the free tiers offered by AWS, Azure, and GCP.

Initialize Terraform.

$ terraform init

Run the following to create an execution plan. This command does not create any resources; it only shows the actions Terraform will perform to reach the desired state defined in the Terraform files.

$ terraform plan

Apply your Terraform configuration. It should take a couple of minutes.

$ terraform apply

After the resources are created, the output should be similar to the following:

mancenter_public_ip = 3.92.204.153
members_public_ip = [
  "3.82.226.227",
  "3.87.211.122",
]

You have now deployed two Hazelcast cluster members and a Hazelcast Management Center instance. You can monitor the state of your cluster at the following address:

http://mancenter_public_ip:8080

You can change the input variables defined in variables.tf by updating terraform.tfvars; running terraform apply again will move the deployment to the new desired state. You can also SSH into the VMs using the IP addresses shown in the output of terraform apply. If you can no longer find that output, run terraform show to see the current state of your configuration.

When you are done with the tutorial, run the following to delete all the resources created.

$ terraform destroy

Summary

In this tutorial, you used Terraform to create a Hazelcast cluster in the cloud. You defined the desired state in main.tf, and Terraform applied it on the cloud provider. Then, you used Hazelcast Management Center to monitor the state of the cluster. Finally, you changed the desired state by updating the terraform.tfvars file, and Terraform applied the changes when you ran terraform apply.