Working with Cluster-Side Modules in Cloud
Cluster-side modules are Java classes that a cluster can execute or store in the cloud. You can write cluster-side modules to execute custom code or store custom objects. This guide describes the workflows and best practices for implementing cluster-side modules.
The basic workflow for any cluster-side module is the following:
- Build the Java class and implement any required interfaces.
- Test your implementation on a development cluster.
- Deploy your Java classes to your cluster.
Then, you can use any client to interact with the cluster-side modules, for example by querying objects or triggering processes.
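As an illustration, once a custom domain object class is deployed to the cluster, a client can store and query instances of it in a map. The following Java sketch is illustrative only: the `Customer` class, the `customers` map name, and the connection placeholders are assumptions for the example, and production connection details such as TLS settings are omitted.

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import com.hazelcast.query.Predicates;

import java.io.Serializable;
import java.util.Collection;

public class QueryCustomObjects {

    /** Hypothetical domain object; in practice this class is also deployed to the cluster. */
    public static class Customer implements Serializable {
        private final String name;
        private final boolean active;

        public Customer(String name, boolean active) {
            this.name = name;
            this.active = active;
        }

        public String getName() { return name; }
        public boolean isActive() { return active; }
    }

    public static void main(String[] args) {
        // Placeholder connection settings; copy the real values from your Cloud console.
        ClientConfig config = new ClientConfig();
        config.setClusterName("YOUR_CLUSTER_NAME");
        config.getNetworkConfig().getCloudConfig()
              .setEnabled(true)
              .setDiscoveryToken("YOUR_DISCOVERY_TOKEN");

        HazelcastInstance client = HazelcastClient.newHazelcastClient(config);

        IMap<String, Customer> customers = client.getMap("customers");
        customers.put("c1", new Customer("Alice", true));

        // Query domain objects by attribute.
        Collection<Customer> active = customers.values(Predicates.equal("active", true));
        System.out.println("Active customers: " + active.size());

        client.shutdown();
    }
}
```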
What Are Cluster-Side Modules?
Cluster-side modules include any code that must be deployed as Java code to the cluster, including the following:
Module | Description
---|---
MapStore and MapLoader | Load additional and updated data from an external data source into your Hazelcast map, and propagate updates in your Hazelcast map to an external data source.
Entry processor | Atomically execute code on a map entry. You cannot use a custom name for entry processors in Cloud; you must use the default one.
Executor service | Asynchronously execute tasks, such as database queries, complex calculations, and image rendering. You cannot use a custom name for executors in Cloud; you must use the default one.
Custom object class (domain objects) | Allow clusters to store or process domain objects that you use in your applications.
Pipeline | Process streaming or batch data in Cloud, using one or more data sources and sinks.
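To make the table concrete, the following is a minimal sketch of an entry processor. The class name, key type, and counter semantics are assumptions for the example, not a prescribed implementation.

```java
import com.hazelcast.map.EntryProcessor;

import java.util.Map;

/**
 * A minimal entry processor sketch (class and value semantics are illustrative).
 * It atomically increments an integer value stored under a map key.
 */
public class IncrementEntryProcessor implements EntryProcessor<String, Integer, Integer> {

    @Override
    public Integer process(Map.Entry<String, Integer> entry) {
        Integer current = entry.getValue();
        int updated = (current == null ? 0 : current) + 1;
        entry.setValue(updated); // the update is applied atomically on the owning member
        return updated;
    }
}
```

After the class is deployed to the cluster, a client could invoke it with a call such as `map.executeOnKey("counter", new IncrementEntryProcessor())`.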
Supported Serializers
Cluster-side modules must be serializable because they are sent to other members in the cluster and/or to your client applications. Cloud supports the following serializers; see the Hazelcast Platform documentation to learn more about each one.
Serializer | Advantages | Disadvantages | Client support
---|---|---|---
 | | | All clients
 | | | All clients
 | | | Java only
 | | | Java only
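As an example of what a serializable cluster-side class can look like, the following sketch implements IdentifiedDataSerializable, one of Hazelcast Platform's standard serialization interfaces. The class name, factory ID, and class ID are arbitrary values chosen for the example.

```java
import com.hazelcast.nio.ObjectDataInput;
import com.hazelcast.nio.ObjectDataOutput;
import com.hazelcast.nio.serialization.IdentifiedDataSerializable;

import java.io.IOException;

/** Illustrative domain object using IdentifiedDataSerializable (IDs are arbitrary examples). */
public class Employee implements IdentifiedDataSerializable {

    public static final int FACTORY_ID = 1000;
    public static final int CLASS_ID = 100;

    private String name;
    private int age;

    public Employee() {
        // Required for deserialization
    }

    public Employee(String name, int age) {
        this.name = name;
        this.age = age;
    }

    @Override
    public void writeData(ObjectDataOutput out) throws IOException {
        out.writeString(name);
        out.writeInt(age);
    }

    @Override
    public void readData(ObjectDataInput in) throws IOException {
        name = in.readString();
        age = in.readInt();
    }

    @Override
    public int getFactoryId() {
        return FACTORY_ID;
    }

    @Override
    public int getClassId() {
        return CLASS_ID;
    }
}
```

A complete implementation also needs a matching DataSerializableFactory that creates `Employee` instances for `CLASS_ID`, deployed alongside the class and registered in your serialization configuration.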
Requirements and Limitations
Any Java code in cluster-side modules must meet the following requirements.
System-Level APIs
Code in cluster-side modules must not access system-level APIs. Hazelcast throws a Java security exception if a cluster-side module tries to access these APIs.
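For illustration only, the following executor task attempts a system-level operation (terminating the JVM), assuming JVM shutdown counts among the blocked system-level APIs; code along these lines fails with a security exception when it runs on a Cloud cluster.

```java
import java.io.Serializable;
import java.util.concurrent.Callable;

/**
 * Anti-example: a task that attempts a system-level operation.
 * Cluster-side code like this is rejected with a Java security exception.
 */
public class ForbiddenTask implements Callable<String>, Serializable {

    @Override
    public String call() {
        // Trying to shut down the member's JVM is a system-level call
        // that cluster-side modules are not allowed to make.
        System.exit(1);
        return "never reached";
    }
}
```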
Package Names
Your package names must not start with com.hazelcast. Hazelcast ignores these packages.
Dependency Conflicts
The libraries in your project must not conflict with those that are built into Cloud. If your project includes a library that’s already available in Cloud, you may experience conflicts that stop Hazelcast from processing your cluster-side modules. To avoid packaging conflicting libraries in your code, we recommend using one of the following options:
- Shaded JARs: When you shade JARs, the classes inside them are relocated and rewritten to create a private copy that is bundled alongside your code. To learn more about shaded JARs, see this answer on Stack Overflow.
- Provided scope: When you use the provided scope for a JAR, it is available on your classpath during compilation but is not packaged with your project archive. To learn more about the provided scope, see the documentation for your build tool.
Best Practices for Testing
Before you go into production with your cluster-side modules, it’s best to test them on a development cluster to make sure that they work as expected. To test cluster-side modules, follow these best practices:
- Use a development cluster: It's faster to test cluster-side modules on a development cluster.
- Use the Cloud Maven plugin: The Maven plugin allows you to package and deploy your cluster-side modules in a single command from your IDE. You can also debug your cluster-side modules by streaming cluster logs after deployment.
Moving to Production
After testing your cluster-side modules, deploy them to a production cluster on either Cloud Standard or Cloud Dedicated.
You must provide a payment card or other payment method to run more than one Cloud cluster.