Overview

Testing is essential to validate the correctness, reliability, and performance of applications that integrate with Hazelcast.

Given Hazelcast’s distributed nature and its capabilities for caching, streaming, and distributed execution, tests should cover not just business logic that uses data structures (IMap, Pipeline, etc.), but also custom logic expected to be executed within the Hazelcast cluster (MapStore, EventListener, etc.).

During application development, tests should run continuously, quickly, and reliably, both in the local environment and in CI/CD.

Hazelcast applications typically include:

  • Application-managed logic, executed on local threads, that accesses Hazelcast distributed data structures.

  • Hazelcast-managed logic (such as entry processors, jobs, or listeners) executed on distributed nodes.

In both cases, a set of configuration options is defined (either programmatically or via a configuration file) to specify the cluster and wire up the custom logic.
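For example, a member can be configured programmatically before it is started. The following is a minimal sketch; the cluster name, map name, and listener class are hypothetical placeholders:

    import com.hazelcast.config.Config;
    import com.hazelcast.config.EntryListenerConfig;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;

    public class ClusterSetup {
        public static HazelcastInstance start() {
            Config config = new Config();
            config.setClusterName("app-cluster"); // hypothetical cluster name

            // Wire custom logic (an entry listener) into a map configuration:
            // listener class name, local=false, includeValue=true.
            config.getMapConfig("orders") // hypothetical map name
                  .addEntryListenerConfig(
                          new EntryListenerConfig("com.example.OrderListener", false, true));

            return Hazelcast.newHazelcastInstance(config);
        }
    }

The same options can equivalently be expressed in a declarative configuration file.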

Types of tests

Tests should be written at multiple levels. Each level addresses different aspects of the system and helps identify different categories of problems.

  • Unit tests validate small pieces of functionality in isolation. They are used to test individual classes or methods, often using mocks or stubs to isolate the system under test. These tests are fast, deterministic, and designed to catch logic errors and enforce the contract of individual units.

  • Component tests validate the interaction between a single unit (such as a service or data handler) and its direct dependencies. These tests may use real instances of dependencies, such as an in-memory Hazelcast instance or a MapStore, to verify correct integration points like persistence, data loading, or listener execution, as sketched in the example below.

  • Integration tests validate the interaction between multiple components or services. They are typically executed in environments with multiple Hazelcast members or clients, and validate cross-cutting behavior like data replication, failover handling, and distributed execution logic.

  • Performance and benchmarking tests are used to validate how the system behaves under load. These tests focus on metrics like latency, throughput, and resource utilization.

Hazelcast Simulator can be used to simulate real-world distributed environments and inject failures or network conditions, enabling realistic performance verification in a simulated, production-like deployment.
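As an illustration of the component level, the sketch below verifies that a stub MapLoader is consulted when a key is read through a real in-memory map. It uses a plain embedded member for simplicity; the test-support factories described in the next section provide a faster alternative. The class, map name, and stub data are hypothetical:

    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Map;

    import com.hazelcast.config.Config;
    import com.hazelcast.config.MapStoreConfig;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;
    import com.hazelcast.map.MapLoader;
    import org.junit.After;
    import org.junit.Test;

    import static org.junit.Assert.assertEquals;

    public class ProductLoaderComponentTest {

        // Hypothetical stub standing in for a real persistence layer.
        static class StubProductLoader implements MapLoader<String, String> {
            private final Map<String, String> backing = new HashMap<>();

            StubProductLoader() {
                backing.put("sku-1", "Widget");
            }

            @Override
            public String load(String key) {
                return backing.get(key);
            }

            @Override
            public Map<String, String> loadAll(Collection<String> keys) {
                Map<String, String> result = new HashMap<>();
                for (String key : keys) {
                    result.put(key, backing.get(key));
                }
                return result;
            }

            @Override
            public Iterable<String> loadAllKeys() {
                return backing.keySet();
            }
        }

        @After
        public void tearDown() {
            Hazelcast.shutdownAll();
        }

        @Test
        public void mapLoaderIsConsultedOnCacheMiss() {
            Config config = new Config();
            config.getMapConfig("products") // hypothetical map name
                  .setMapStoreConfig(new MapStoreConfig()
                          .setEnabled(true)
                          .setImplementation(new StubProductLoader()));

            HazelcastInstance member = Hazelcast.newHazelcastInstance(config);
            IMap<String, String> products = member.getMap("products");

            // A read-through miss should invoke the stub loader.
            assertEquals("Widget", products.get("sku-1"));
        }
    }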

APIs and tools overview

Hazelcast provides a set of APIs and test utilities that can be used at each of these levels. These include the classes HazelcastTestSupport and TestHazelcastFactory for in-JVM multi-node testing of distributed data structures, JetTestSupport for testing of streaming pipelines, a set of static assertion methods to help verify synchronous and asynchronous code, and annotations to configure test execution parallelism.
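For instance, the static assertion helpers inherited from HazelcastTestSupport can verify asynchronous effects, such as an entry listener firing, without hand-rolled polling loops. A minimal sketch, assuming the Hazelcast test artifacts are on the test classpath (the map name and listener logic are hypothetical):

    import java.util.concurrent.atomic.AtomicInteger;

    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;
    import com.hazelcast.map.listener.EntryAddedListener;
    import com.hazelcast.test.HazelcastTestSupport;
    import org.junit.Test;

    import static org.junit.Assert.assertEquals;

    public class ListenerAssertionTest extends HazelcastTestSupport {

        @Test
        public void entryAddedListenerFiresEventually() {
            // Creates an in-JVM member via the base class, which also
            // shuts it down after the test.
            HazelcastInstance member = createHazelcastInstance();
            IMap<String, String> map = member.getMap("events"); // hypothetical map name

            AtomicInteger added = new AtomicInteger();
            map.addEntryListener(
                    (EntryAddedListener<String, String>) event -> added.incrementAndGet(), true);

            map.put("k", "v");

            // Retries the wrapped assertion until it passes or a timeout elapses.
            assertTrueEventually(() -> assertEquals(1, added.get()));
        }
    }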

Hazelcast also provides Hazelcast Simulator, a standalone tool for full-scale benchmarking, fault injection, and capacity testing.

Testing with a remote cluster

The testing approach described in this documentation is complementary to testing against a remote Hazelcast cluster. It does not aim to replace remote or system-level integration testing, but rather to reduce the reliance on it during most stages of development.

By using the Hazelcast-provided testing utilities covered here, you can validate the majority of your application logic locally, without requiring a network connection to a running Hazelcast cluster. This results in tests that are:

  • Faster to execute.

  • Simpler to configure.

  • Less brittle, due to fewer external dependencies.

  • More deterministic, reducing the chance of intermittent failures caused by environmental conditions.

Testing with embedded instances is therefore recommended for the bulk of tests that validate the application's integration logic with the cluster. Complex tests requiring infrastructure setup (such as provisioning remote nodes and securing connections) can be minimized and deferred to a later stage.

Hazelcast test support versus Hazelcast embedded

Hazelcast can be started in embedded mode by calling Hazelcast.newHazelcastInstance(). While this is fully functional and equivalent to production use, startup and shutdown are much slower because they require a fully configured network stack.

Test support classes provide an alternative approach. When members are started using factory classes, such as TestHazelcastFactory, they use a mock network stack that doesn’t require TCP/IP binding and port allocation.

This approach is ideal for unit, component, and integration testing where speed and isolation are more important than simulating full network behavior. For scenarios where real network interactions must be validated (e.g., WAN replication, TLS configuration), embedded mode or remote cluster testing remains the preferred choice.

Members in both cases are functionally equivalent and all core Hazelcast features behave the same in both modes.
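A minimal sketch of the factory approach, assuming the Hazelcast test artifacts are on the test classpath (the map name is a hypothetical placeholder):

    import com.hazelcast.client.test.TestHazelcastFactory;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;
    import org.junit.After;
    import org.junit.Test;

    import static org.junit.Assert.assertEquals;

    public class MockNetworkTest {

        private final TestHazelcastFactory factory = new TestHazelcastFactory();

        @After
        public void tearDown() {
            factory.terminateAll(); // releases every member and client the factory created
        }

        @Test
        public void clientSeesMemberData() {
            // Both instances communicate over the mock network: no TCP/IP ports are bound.
            HazelcastInstance member = factory.newHazelcastInstance();
            HazelcastInstance client = factory.newHazelcastClient();

            member.getMap("cache").put("k", "v"); // hypothetical map name
            IMap<String, String> viaClient = client.getMap("cache");

            assertEquals("v", viaClient.get("k"));
        }
    }

Because no ports are bound, this also makes it practical to run many such tests in parallel within a single JVM.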

Hazelcast data structures and mocking frameworks

Mocking frameworks such as Mockito are commonly used in unit tests to isolate the class under test by replacing its dependencies with controlled substitutes, or mocks. When a class depends on a Hazelcast data structure, such as an IMap, it may seem convenient to mock the interface directly.

This approach is not recommended. Mocking Hazelcast interfaces should be done only in exceptional cases. It is considered an antipattern to mock interfaces that your code does not own. See paragraph 4.1 of this paper for further details.

However, in some cases mocking can be beneficial, for example, when testing code paths that depend on Hazelcast event listeners, verifying wiring and configuration, or exercising behaviors that are otherwise difficult to reproduce in a lightweight unit test. These situations should be treated as exceptions, and for most scenarios a real in-JVM Hazelcast instance (via the test support factories) provides a more reliable mechanism for unit testing.

Mocking external APIs like Hazelcast’s can introduce several problems:

  • Brittleness: Mocks rely on internal implementation details. If Hazelcast modifies an interface or its behavior in a future release, tests may break even if application logic remains valid.

  • Blind spots: Mocked interfaces bypass Hazelcast’s actual runtime behavior, resulting in tests that do not validate important production characteristics. Specifically, mocking Hazelcast may bypass:

    • Serialization and deserialization, which is critical in distributed environments.

    • Key and value validation logic, including null checks and type handling.

    • Eviction and TTL policies, which are applied by the runtime and affect data availability.

    • Listeners and interceptors, which are executed in response to changes or accesses.

Use mocks only for interfaces you control, or for boundary conditions that cannot be simulated with real instances.
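As a sketch of that boundary, assume the application defines its own repository interface, with a Hazelcast-backed implementation covered by component tests; unit tests then mock the owned interface rather than IMap. All names below are hypothetical:

    import org.junit.Test;

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    public class OrderServiceTest {

        // An interface the application owns; a Hazelcast-backed implementation
        // wrapping an IMap would be covered by component tests instead.
        interface OrderRepository {
            String findById(String id);
        }

        static class OrderService {
            private final OrderRepository repository;

            OrderService(OrderRepository repository) {
                this.repository = repository;
            }

            String describe(String id) {
                String order = repository.findById(id);
                return order == null ? "unknown" : "order:" + order;
            }
        }

        @Test
        public void describesKnownOrder() {
            OrderRepository repository = mock(OrderRepository.class);
            when(repository.findById("42")).thenReturn("widgets");

            assertEquals("order:widgets", new OrderService(repository).describe("42"));
            verify(repository).findById("42");
        }
    }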

Next steps

The following sections detail how to use these tools: