Use Cloud Standard as a Write-Through Cache with MongoDB Atlas

In this tutorial, you’ll build an application that writes changes made to a map back to MongoDB Atlas.


In a write-through cache, data is written to both the cache and the backing data store as part of the same operation. Because every write incurs a round trip to the data store, this caching pattern is best suited to workloads where writes to the cache are infrequent.

To build a write-through cache in Hazelcast, you can use the MapStore API. The MapStore interface includes methods that are triggered when operations are invoked on a map. You can implement your own logic in these methods to connect to an external data store, load data from it, and write data back to it. For example, you can use a MapStore to load data from a MongoDB, MySQL, or PostgreSQL database.
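Independently of Hazelcast, the write-through pattern itself can be sketched in a few lines of plain Java. The interface and class names below are illustrative stand-ins, not Hazelcast types: the wrapper writes each entry to the backing store before updating its in-memory map, which is exactly what a MapStore-backed map does with a write delay of 0.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for a store callback, not a real Hazelcast interface.
interface Store<K, V> {
    void store(K key, V value);
}

class WriteThroughCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Store<K, V> backingStore;

    WriteThroughCache(Store<K, V> backingStore) {
        this.backingStore = backingStore;
    }

    // Write-through: the backing store is updated first; the cache entry
    // is only added once the store call succeeds.
    void put(K key, V value) {
        backingStore.store(key, value);
        cache.put(key, value);
    }

    V get(K key) {
        return cache.get(key);
    }
}

public class WriteThroughDemo {
    public static void main(String[] args) {
        Map<Integer, String> database = new HashMap<>(); // stand-in for MongoDB
        WriteThroughCache<Integer, String> cache = new WriteThroughCache<>(database::put);

        cache.put(1, "Jake");
        // Both the cache and the "database" now hold the entry.
        System.out.println(cache.get(1) + " " + database.get(1)); // prints "Jake Jake"
    }
}
```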

A MapStore connecting a database to a Hazelcast Cluster

In this tutorial, you’ll deploy the following to a Cloud Standard cluster:

  • A MapStore that connects to a MongoDB Atlas database and is integrated into the lifecycle of a map.

  • A Person class that you will store in the cache.

You’ll then use SQL to write new Person objects to a Hazelcast map, at which point the MapStore will replicate those objects to the MongoDB Atlas database.

The code in this tutorial is available as a sample app on GitHub.

Before you Begin

You’ll need the following to complete this tutorial:

  • A Cloud Standard cluster.

  • The Maven command-line tool, to package and deploy the sample project.

  • A MongoDB Atlas account. If you don’t have one, you can create one in Step 1.

Step 1. Create a MongoDB Atlas Database

You need a database that your cluster can connect to and store your data in. MongoDB Atlas is a multi-cloud database service that’s quick, free, and easy to set up without any payment details.

If you already have a MongoDB database that is accessible from the Internet, you can skip this step. Otherwise, follow the steps below.

  1. Sign up for a MongoDB Atlas account if you don’t have one already.

  2. Create an organization and a new project inside that organization.

  3. Click Build a Database and select the free Shared option.

  4. Select the AWS cloud provider and the region that’s closest to the one that you chose when you created your Cloud Standard cluster. The closer the database is to your cluster, the faster the connection.

  5. Leave the other options as the defaults and click Create Cluster.

  6. Enter a username and password for your database and click Create User.

    Make a note of these credentials because you will need to give them to Cloud Standard later.
  7. Go to Database in the left navigation under Deployment.

  8. Go to the Collections tab, click Add My Own Data, and create a database called Hazelcast with a collection called person.

    Your Cloud Standard cluster will store your data in this person collection.

    Set up a database on MongoDB

  9. Go to Network Access in the left navigation under Security, and click Add IP Address.

  10. Select Allow Access From Anywhere and click Confirm.

Now your database is ready. Your Cloud Standard cluster will be able to connect to it and write data to your collection.
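When you configure the MapStore later, you’ll need this database’s connection string. Driver connection strings from Atlas generally follow the shape below, where the placeholders stand for the database user credentials and cluster host shown in the Atlas console:

```
mongodb+srv://<username>:<password>@<cluster-host>/?retryWrites=true&w=majority
```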

Step 2. Clone the Sample Project

All the code for this project is available on GitHub. In this step, you’ll clone the project and learn how it works.

Clone the GitHub repository:

git clone

cd write-through-cache-mongodb-mapstore

The sample code for this tutorial is in the src/main/java/sample/com/hz/demos/mapstore/mongo/ directory:


This class defines the objects that you will store in a Hazelcast map and that will be replicated to Atlas.

The Person class implements the Java Serializable interface so that Hazelcast can serialize it to send to clients and other cluster members, and deserialize it again.

To allow you to query the object fields with SQL, the fields have getters and setters, although you could also make the fields public.

public class Person implements Serializable {

    private Integer id;

    private String name;

    private String lastname;

    public String getLastname() {
        return lastname;
    }

    public void setLastname(String lastname) {
        this.lastname = lastname;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public Integer getId() {
        return id;
    }

    public void setId(Integer id) {
        this.id = id;
    }
}


This class declares methods for interacting with the MongoDB database, using the MongoDB Java driver. These methods are invoked by the MapStore, which is defined in the file.

public class MongoPersonRepository implements PersonRepository {

    private final String name;

    private final MongoDatabase db;

    public MongoPersonRepository(String name, MongoDatabase db) {
        this.name = name;
        this.db = db;
    }

    public void save(Person person) {
        MongoCollection<Document> collection = db.getCollection(name);
        Document document = new Document("name", person.getName())
            .append("lastname", person.getLastname())
            .append("id", person.getId());
        collection.replaceOne(Filters.eq("id", person.getId()), document, new ReplaceOptions().upsert(true));
    }

    public void deleteAll() {
        MongoCollection<Document> collection = db.getCollection(name);
        collection.deleteMany(new Document());
    }

    public void delete(Collection<Integer> ids) {
        MongoCollection<Document> collection = db.getCollection(name);
        collection.deleteMany(Filters.in("id", ids));
    }

    public List<Person> findAll(Collection<Integer> ids) {
        List<Person> persons = new ArrayList<>();
        MongoCollection<Document> collection = db.getCollection(name);
        try (MongoCursor<Document> cursor = collection.find(Filters.in("id", ids)).iterator()) {
            while (cursor.hasNext()) {
                Document document = cursor.next();
                Person person = new Person();
                person.setId(document.get("id", Integer.class));
                person.setName(document.get("name", String.class));
                person.setLastname(document.get("lastname", String.class));
                persons.add(person);
            }
        }
        return persons;
    }

    public Collection<Integer> findAllIds() {
        Set<Integer> ids = new LinkedHashSet<>();
        MongoCollection<Document> collection = db.getCollection(name);
        try (MongoCursor<Document> cursor = collection.find().projection(Projections.include("id")).iterator()) {
            while (cursor.hasNext()) {
                Document document = cursor.next();
                ids.add(document.get("id", Integer.class));
            }
        }
        return ids;
    }
}


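The PersonRepository interface itself isn't listed in the tutorial, but its shape can be inferred from the methods the Mongo implementation provides and from the find() call the MapStore's load() method makes. As a hedged sketch under those assumptions (the real interface in the sample project may differ, and the Person stand-in with a constructor is illustrative), an in-memory implementation lets you exercise MapStore logic without a live MongoDB:

```java
import java.util.*;
import java.util.stream.Collectors;

// Interface shape inferred from the Mongo implementation; may differ
// from the sample project's actual PersonRepository.
interface PersonRepository {
    void save(Person person);
    Optional<Person> find(Integer id);
    void deleteAll();
    void delete(Collection<Integer> ids);
    List<Person> findAll(Collection<Integer> ids);
    Collection<Integer> findAllIds();
}

// Minimal stand-in for the tutorial's Person class.
class Person {
    final Integer id;
    final String name;
    final String lastname;

    Person(Integer id, String name, String lastname) {
        this.id = id;
        this.name = name;
        this.lastname = lastname;
    }
}

// In-memory implementation, handy for unit tests that shouldn't need
// a database connection.
class InMemoryPersonRepository implements PersonRepository {
    private final Map<Integer, Person> data = new LinkedHashMap<>();

    public void save(Person person) {
        data.put(person.id, person); // upsert, like replaceOne(..., upsert(true))
    }

    public Optional<Person> find(Integer id) {
        return Optional.ofNullable(data.get(id));
    }

    public void deleteAll() {
        data.clear();
    }

    public void delete(Collection<Integer> ids) {
        ids.forEach(data::remove);
    }

    public List<Person> findAll(Collection<Integer> ids) {
        return ids.stream().map(data::get).filter(Objects::nonNull).collect(Collectors.toList());
    }

    public Collection<Integer> findAllIds() {
        return new LinkedHashSet<>(data.keySet());
    }
}

public class InMemoryRepositoryDemo {
    public static void main(String[] args) {
        PersonRepository repo = new InMemoryPersonRepository();
        repo.save(new Person(1, "Jake", "Cahill"));
        repo.save(new Person(2, "Ada", "Lovelace"));
        repo.delete(Collections.singleton(2));
        System.out.println(repo.findAllIds()); // prints [1]
    }
}
```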

This class declares the trigger methods for the MapStore, which works with a map that has integer keys and Person object values.

The class implements the MapLoaderLifecycleSupport interface to manage the connection to the database, using the init() and destroy() methods.

The init() method reads the database connection credentials from the Java properties. You’ll provide these in the Cloud console when you configure the MapStore.

public class MongoPersonMapStore implements MapStore<Integer, Person>, MapLoaderLifecycleSupport {

    private static final Logger log = LoggerFactory.getLogger(MongoPersonMapStore.class);

    private MongoClient mongoClient;

    private PersonRepository personRepository;

    public void init(HazelcastInstance hazelcastInstance, Properties properties, String mapName) {
        this.mongoClient = new MongoClient(new MongoClientURI(properties.getProperty("uri")));
        MongoDatabase database = this.mongoClient.getDatabase(properties.getProperty("database"));
        this.personRepository = new MongoPersonRepository(mapName, database);
        log.info("MongoPersonMapStore::initialized");
    }

    public void destroy() {
        MongoClient mongoClient = this.mongoClient;
        if (mongoClient != null) {
            mongoClient.close();
        }
    }

    public void store(Integer key, Person value) {
        log.info("MongoPersonMapStore::store key {} value {}", key, value);
        getRepository().save(value);
    }

    public void storeAll(Map<Integer, Person> map) {
        log.info("MongoPersonMapStore::store all {}", map);
        for (Map.Entry<Integer, Person> entry : map.entrySet()) {
            store(entry.getKey(), entry.getValue());
        }
    }

    public void delete(Integer key) {
        log.info("MongoPersonMapStore::delete key {}", key);
        getRepository().delete(Collections.singleton(key));
    }

    public void deleteAll(Collection<Integer> keys) {
        log.info("MongoPersonMapStore::delete all {}", keys);
        getRepository().delete(keys);
    }

    public Person load(Integer key) {
        log.info("MongoPersonMapStore::load by key {}", key);
        return getRepository().find(key).orElse(null);
    }

    public Map<Integer, Person> loadAll(Collection<Integer> keys) {
        log.info("MongoPersonMapStore::loadAll by keys {}", keys);
        return getRepository().findAll(keys).stream()
            .collect(Collectors.toMap(Person::getId, Function.identity()));
    }

    public Iterable<Integer> loadAllKeys() {
        log.info("MongoPersonMapStore::loadAllKeys");
        return getRepository().findAllIds();
    }

    private PersonRepository getRepository() {
        PersonRepository personRepository = this.personRepository;
        if (personRepository == null) {
            throw new IllegalStateException("Person Repository must not be null!");
        }
        return personRepository;
    }
}

Step 3. Deploy the Classes to the Cluster

In this step, you’ll use the Hazelcast Cloud Maven plugin to package the project into a single JAR file and upload that file to your cluster.

  1. Open the pom.xml file.

  2. Configure the Maven plugin with values for the following elements:

    Element Location in the Cloud console


    Next to Connect Client, select any client, and then go to Advanced Setup. The cluster name/ID is at the top of the list.

    <apiKey> and <apiSecret>

    You can create only one API key and secret pair on your account. If you need to change your API credentials, you must first remove your existing credentials, and then create new credentials.

    To create a set of API credentials, do the following:

    1. Sign into the Cloud console

    2. Select Account from the side navigation bar

    3. Select Developer from the Account options

      The Developer screen appears.

    4. Select the Generate New API Key button

    Use these credentials in your applications to manage all clusters in your account.

    After creating your credentials, make sure that your applications are correctly configured to use them.

  3. Execute the following goal of the Maven plugin to package the project into a JAR file and deploy that file to your Cloud Standard cluster:

    mvn clean package hazelcast-cloud:deploy

You should see that the file was uploaded and is ready to be used.


Step 4. Configure the MapStore

To use a MapStore, you must configure it for a specific map in your Cloud Standard cluster. When a map is configured with a MapStore, Hazelcast plugs the MapStore implementation into the lifecycle of the map so that the MapStore is triggered when certain map operations are invoked.

  1. Open the Cloud console.

  2. Click Add + under Data Structures on the cluster dashboard.

  3. Click Add New Configuration in the top right corner.

  4. Enter person in the Map Name field.

  5. In the bottom left corner, select Enable MapStore.

  6. Select the MongoPersonMapStore class name from the dropdown. If you don’t see it, you need to deploy the MapStore to your Cloud Standard cluster first.

  7. Leave the Write Delay Seconds field set to 0 to configure a write-through cache.

    In this configuration, when you add new entries to the person map, Hazelcast will invoke the MapStore’s store() method before adding the entry to the map.

  8. In the Properties section, enter the following.

    Hazelcast encrypts these properties to keep them safe.

    Name Value

    uri

    Your MongoDB connection string. You can find this connection string in the Atlas console.

    database

    The name of the database that you created in step 1.

  9. Click Save Configuration.

Step 5. Test the MapStore

To test that the MapStore is working, you need to create a map, add some entries to it, and check that those entries are replicated to your MongoDB database.

  1. Sign into the Cloud console.

  2. Click Management Center.

  3. Click SQL Browser in the toolbar at the top of the page.

  4. Execute the following queries to set up a connection to the map and add an entry to it.

    CREATE MAPPING person
    TYPE IMap
    OPTIONS (
        'keyFormat' = 'int',
        'valueFormat' = 'java',
        'valueJavaClass' = 'sample.com.hz.demos.mapstore.mongo.Person');

    INSERT INTO person VALUES (1, 1, 'Cahill', 'Jake');
  5. Open your database in MongoDB Atlas and click on the person collection.

    You should see that the collection has a new document, which contains the same data as the entry you added to the map in your Cloud Standard cluster.

    A person

If your test fails, do the following:

  1. Remove the MapStore class.

  2. Update the MapStore implementation or the configuration and start the process again.


Summary

In this tutorial, you learned how to do the following:

  • Use a MapStore to write changes made in a Cloud Standard cluster back to a MongoDB Atlas database.

  • Deploy the MapStore to a Cloud Standard cluster, using the Hazelcast Cloud Maven plugin.

  • Trigger the MapStore by adding data to a map, using SQL.