Upgrading from IMDG 3.12.x
This section lists the distribution, documentation and API changes to be aware of when you have been using IMDG 3.12.x and want to move to Hazelcast Platform.
See also Upgrading from IMDG and Jet Version 4.x to learn about the changes that need to be considered while upgrading from IMDG 4.x to Hazelcast Platform.
Hazelcast offers tools and features for a smooth migration from 3.12 to Platform 5.0. See Migrating Data from IMDG 3.12.x.
Hazelcast Platform is a major version release. Major releases allow us to break compatibility in the wire protocols and APIs, as well as to remove previously deprecated APIs.
As breaking changes have been made to the client and cluster member protocols, it is not possible to perform an in-place or rolling upgrade from a running IMDG 3.12.x cluster to Platform 5.0. The only way to upgrade to Platform 5.0 is to completely shut down the cluster.
Removal of Hazelcast Client Module
- The hazelcast-client module has been merged into the core module: all the classes in the hazelcast-client module have been moved to hazelcast. The hazelcast-client.jar file is not created anymore.
- Also, the com.hazelcast.client Java module is not used anymore. All classes are now available within the com.hazelcast.core module.
JCache Default Caching Provider
The default CachingProvider is the client-side CachingProvider. To select the member-side CachingProvider, define the Hazelcast property hazelcast.jcache.provider.type. See the Configuring JCache Provider section for more details.
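For example, a minimal sketch of selecting the provider via a system property before obtaining the JCache CachingProvider (the property value used here is an assumption; the accepted values are listed in the Configuring JCache Provider section):

System.setProperty("hazelcast.jcache.provider.type", "member");
CachingProvider provider = Caching.getCachingProvider();
CacheManager cacheManager = provider.getCacheManager();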
Removal of User Defined Services
The public SPI (Service Provider Interface) known as User Defined Services has been removed. It was not simple enough and its backwards compatibility had been broken. A new and clearly defined SPI may be developed in the future if there is enough interest. The removed SPI's classes are kept for internal use.
Changes in Client Connection Retry Mechanism
The connection-attempt-period and connection-attempt-limit configuration elements have been removed. Instead, the elements of connection-retry are now used. See Configuring Client Connection Retry for the usage of these new elements.
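As a rough illustration of the new style (the timeout and backoff values below are assumptions), the retry behavior is configured through the connection strategy's retry settings:

ClientConfig clientConfig = new ClientConfig();
ConnectionRetryConfig retryConfig =
        clientConfig.getConnectionStrategyConfig().getConnectionRetryConfig();
retryConfig.setInitialBackoffMillis(1000);        // wait 1 s before the first retry
retryConfig.setMaxBackoffMillis(30000);           // cap the backoff at 30 s
retryConfig.setMultiplier(2.0);                   // double the backoff on each attempt
retryConfig.setClusterConnectTimeoutMillis(20000); // give up after 20 s in total
HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);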
Increasing the Member/Client Thread Counts
If 20 or more processors are detected, a Hazelcast member by default starts 4+4 (4 input and 4 output) I/O threads. This increases out-of-the-box performance on faster machines because the workload, especially in caching scenarios, is often I/O bound, and having some extra cores available for I/O can make a significant difference. If fewer than 20 cores are detected, 3+3 I/O threads are used and the behavior remains the same as in the Hazelcast IMDG 3.x series.
A smart client, by default, gets 3+3 (3 input and 3 output) I/O threads to speed up performance; previously this was 1+1, and client I/O could become a bottleneck with so few threads. If TLS/SSL is enabled, a smart client uses 3+3 I/O threads by default, which was already the case in previous versions.
A new performance feature called thread overcommit has been available since IMDG 4.0. By default, Hazelcast creates more threads than there are cores; for example, on a 20-core machine it creates 28 threads: 20 threads for partition operations and 4+4 threads for I/O. In a typical caching usage (get/put/set, etc.), having too many threads can cause performance degradation due to increased context switching. So there is a new option called hazelcast.operation.thread.overcommit.
If this property is set to true, i.e., -Dhazelcast.operation.thread.overcommit=true, which is the default, Hazelcast uses the old-style thread configuration where there are more threads than cores. If set to false, the number of partition threads plus the I/O threads is equal to the core count.
Whether this gives a performance boost depends on the environment: in some environments it gives a significant boost and in others a significant loss, so it is best to benchmark your specific situation. If you are doing lots of queries or other CPU-bound tasks, e.g., aggregations, you probably want as many cores as possible available to partition operations.
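A minimal sketch of disabling thread overcommit programmatically (equivalent to the system property shown above):

Config config = new Config();
// With overcommit disabled, partition threads plus I/O threads together match the core count.
config.setProperty("hazelcast.operation.thread.overcommit", "false");
HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);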
See Threading Model for more information on Hazelcast’s threading model.
Optimizing for Single Threaded Usages
A write-through optimization has been introduced, which helps to reduce latency in single threaded usages.
Normally, when a request is made, it is handed over to the I/O system where an I/O thread takes care of sending it over the wire. This is great for throughput, but in single threaded setups it adds to the latency, and therefore reduces throughput, because threads need to be notified.
Hazelcast now detects single threaded usage and tries to write to the socket directly instead of handing the request over to an I/O thread; this optimization is called "write-through".
This technique is applied on the client as well as on the member. Something similar happens when responses are received: normally a response is processed by the response thread, but in case of single threaded usage, the response is processed on the I/O thread, which removes a thread notification and therefore yields higher throughput.
Both write-through and response-through are enabled by default. If Hazelcast detects that there are many active threads, response-through and write-through are disabled so they do not cause performance degradation.
Removing Deprecated Client Configurations
The following methods of ClientConfig have been refactored:
- addNearCacheConfig(String, NearCacheConfig) → addNearCacheConfig(NearCacheConfig)
- setSmartRouting(boolean) → getNetworkConfig().setSmartRouting(boolean)
- getSocketInterceptorConfig() → getNetworkConfig().getSocketInterceptorConfig()
- setSocketInterceptorConfig(SocketInterceptorConfig) → getNetworkConfig().setSocketInterceptorConfig(SocketInterceptorConfig)
- getConnectionTimeout() → getNetworkConfig().getConnectionTimeout()
- setConnectionTimeout(int) → getNetworkConfig().setConnectionTimeout(int)
- addAddress(String) → getNetworkConfig().addAddress(String)
- getAddresses() → getNetworkConfig().getAddresses()
- setAddresses(List) → getNetworkConfig().setAddresses(List)
- isRedoOperation() → getNetworkConfig().isRedoOperation()
- setRedoOperation(boolean) → getNetworkConfig().setRedoOperation(boolean)
- getSocketOptions() → getNetworkConfig().getSocketOptions()
- setSocketOptions() → getNetworkConfig().setSocketOptions(SocketOptions)
- getNetworkConfig().setAwsConfig(new ClientAwsConfig()) → getNetworkConfig().setAwsConfig(new AwsConfig())
Also, the ClientAwsConfig class has been renamed to AwsConfig.
The naming of the declarative configuration elements has not been changed.
See the following table for the before/after configuration samples.
3.12.x | 5.0 |
Adding Near Cache | |
Programmatic Configuration | |
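As a rough 5.0-style programmatic sketch of the relocated settings (map name and address are assumptions):

ClientConfig clientConfig = new ClientConfig();
// Near Cache is still added directly on the client config, now without the name argument.
clientConfig.addNearCacheConfig(new NearCacheConfig("myMap"));
// Network-related settings moved under getNetworkConfig().
clientConfig.getNetworkConfig()
        .addAddress("127.0.0.1:5701")
        .setSmartRouting(true)
        .setRedoOperation(true);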
Changes in Index Configuration
In order to support further extensibility of Hazelcast, index configuration has been refactored.
The index type is now defined through the IndexType enumeration instead of a boolean flag: an ordered index is now referred to as IndexType.SORTED, an unordered one as IndexType.HASH.
In composite indexes, index parts are now defined as a list of strings instead of a single string with comma-separated values.
With these changes, the following configuration parameters have been renamed:
Programmatic configuration objects and methods:
- MapIndexConfig → IndexConfig
- MapConfig.getMapIndexConfig → MapConfig.getIndexConfig
- MapConfig.setMapIndexConfig → MapConfig.setIndexConfig
- MapConfig.addMapIndexConfig → MapConfig.addIndexConfig
- IMap.addIndex(String, boolean) → IMap.addIndex(IndexConfig)
See the following table for the before/after samples.
3.12.x | 5.0 |
Programmatic Configuration | |
Declarative Configuration | |
Dynamic Index Create | |
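For illustration, a minimal 5.0-style sketch of static and dynamic index definitions (map name, attribute names, Employee class and an existing HazelcastInstance hz are assumptions):

MapConfig mapConfig = new MapConfig("employees");
// Ordered index in 3.12.x terms, now expressed with IndexType.SORTED.
mapConfig.addIndexConfig(new IndexConfig(IndexType.SORTED, "age"));
// Unordered index, now expressed with IndexType.HASH.
mapConfig.addIndexConfig(new IndexConfig(IndexType.HASH, "name"));

// Dynamic index creation on an existing map:
IMap<Long, Employee> employees = hz.getMap("employees");
employees.addIndex(new IndexConfig(IndexType.SORTED, "age"));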
Changes in Custom Attributes
Custom attributes are referenced in predicates, queries and indexes. Some improvements have been made to Hazelcast's query engine, and one of the results is a change in the custom attribute configuration.
With this change, the following configuration parameters have been renamed:
Declarative configuration elements:
- extractor → extractor-class-name
Programmatic configuration objects and methods:
- MapAttributeConfig → AttributeConfig
- setExtractor() → setExtractorClassName()
- addMapAttributeConfig() → addAttributeConfig()
See the following table for the before/after samples.
3.12.x | 5.0 |
Programmatic Configuration | |
Declarative Configuration | |
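A minimal 5.0-style sketch of the renamed attribute configuration (the attribute and extractor class names are assumptions):

MapConfig mapConfig = new MapConfig("persons");
AttributeConfig attributeConfig = new AttributeConfig();     // was MapAttributeConfig in 3.12.x
attributeConfig.setName("birthYear");
attributeConfig.setExtractorClassName("com.example.BirthYearExtractor"); // was setExtractor()
mapConfig.addAttributeConfig(attributeConfig);               // was addMapAttributeConfig()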
Also, some custom query attribute classes were previously abstract classes with a single abstract method. They have been converted into functional interfaces:
3.12.x | 5.0 |
Implementing | |
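A rough sketch of implementing an extractor against the 5.0-style interface (the Person class, field names and the use of a raw ValueCollector parameter are assumptions):

public class BirthYearExtractor implements ValueExtractor<Person, Void> {
    @Override
    public void extract(Person person, Void argument, ValueCollector collector) {
        // Collect the derived attribute value for the query engine.
        collector.addObject(person.getBirthDate().getYear());
    }
}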
Removal of MapReduce
The MapReduce API, which had already been deprecated, has been removed. Instead, you can use the aggregations on top of the Query infrastructure and the Hazelcast Jet engine distributed computing platform as its successors and replacements.
See the following table for the before (MapReduce)/after (Hazelcast Jet) word count sample.
3.12.x (MapReduce) | 5.0 (Jet Engine) |
Word Count Sample | |
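As an illustrative 5.0-style sketch (the source list name, sink map name and an existing HazelcastInstance hz are assumptions), a word count with the Jet engine's Pipeline API looks roughly like this:

Pipeline pipeline = Pipeline.create();
pipeline.readFrom(Sources.<String>list("lines"))
        // Split each line into lowercase words.
        .flatMap(line -> Traversers.traverseArray(line.toLowerCase().split("\\W+")))
        .filter(word -> !word.isEmpty())
        // Count occurrences per word.
        .groupingKey(word -> word)
        .aggregate(AggregateOperations.counting())
        .writeTo(Sinks.map("counts"));
hz.getJet().newJob(pipeline).join();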
See the Word Count Task for a full insight.
Refactoring of Migration Listener
The MigrationListener API has been refactored. With this change, an event is published when a new migration process starts and another when it completes. These events include statistics about the migration process, such as the start time, planned migration count and completed migration count.
Additionally, a migration event is published on each replica migration, both for primary and backup replica migrations. This event includes the partition ID, replica index and migration progress statistics.
In 3.12.x, the following were the events listened for by MigrationListener:
- migrationStarted
- migrationCompleted
- migrationFailed
In 5.0, there are the following events instead:
- migrationStarted
- migrationFinished
- replicaMigrationCompleted
- replicaMigrationFailed
See the following table for the before/after samples.
3.12.x | 5.0 |
Implementing a Migration Listener | |
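A rough 5.0-style sketch of the refactored listener (the log output and an existing HazelcastInstance hz are assumptions):

public class ClusterMigrationListener implements MigrationListener {
    @Override
    public void migrationStarted(MigrationState state) {
        System.out.println("Migration started, planned migrations: " + state.getPlannedMigrations());
    }
    @Override
    public void migrationFinished(MigrationState state) {
        System.out.println("Migration finished, completed migrations: " + state.getCompletedMigrations());
    }
    @Override
    public void replicaMigrationCompleted(ReplicaMigrationEvent event) {
        System.out.println("Replica migration completed for partition " + event.getPartitionId());
    }
    @Override
    public void replicaMigrationFailed(ReplicaMigrationEvent event) {
        System.out.println("Replica migration failed for partition " + event.getPartitionId());
    }
}

// Registration:
hz.getPartitionService().addMigrationListener(new ClusterMigrationListener());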
See the MigrationListener Javadoc for a full insight.
Defaulting to OpenSSL
Hazelcast defaults to using OpenSSL when:
- you use TLS/SSL and Hazelcast detects some OpenSSL capabilities, and
- no explicit SSLEngineFactory is configured.
Changes in Security Configurations
Replacing group by a Simple Cluster Name Configuration
The GroupConfig class has been removed. In both the client and member configurations, GroupConfig (or <group> in XML) has been replaced by a simple cluster name configuration. The password part of GroupConfig, which was already deprecated, is now removed.
See the following table for the before/after sample configurations.
3.12.x | 5.0 |
Declarative Configuration | |
Programmatic Configuration | |
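For example, a minimal 5.0-style sketch of the cluster name configuration (the name "dev" is an assumption):

// Member side:
Config config = new Config();
config.setClusterName("dev");

// Client side:
ClientConfig clientConfig = new ClientConfig();
clientConfig.setClusterName("dev");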
Member Authentication and Identity Configuration
Hazelcast IMDG 4.0 replaces the <member-credentials-factory>, <member-login-modules> and <client-login-modules> configurations with references to security realms. Security realms are a new abstraction in the security configuration of Hazelcast members. A realm defines the security configuration independently of the configuration part where the security is used; the component requesting security just references the security realm name.
See the following table for the before/after sample configurations.
3.12.x | 5.0 |
Client Identity Configuration
The <credentials> configuration is no longer supported in the client security configuration. The existing <credentials-factory> configuration allows you to fully replace the credentials, as it is more flexible. There are also new <username-password> and <token> configuration elements which simplify the migration.
See the following table for the before/after sample configurations.
3.12.x | 5.0 |
JAAS Authentication Cleanups
Introducing New Principal Types
The ClusterPrincipal class representing an authenticated user within the JAAS Subject has been replaced by three different principal types:
- ClusterIdentityPrincipal
- ClusterRolePrincipal
- ClusterEndpointPrincipal
All these new principal types share the HazelcastPrincipal interface, so it is simple to get or remove them all from the subject. With this change, the Credentials object is not referenced from the principals anymore.
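For example, a minimal sketch of working with the new principal types through the shared interface (given a JAAS Subject named subject):

// Fetch every Hazelcast principal from the JAAS Subject, regardless of its concrete type.
Set<HazelcastPrincipal> hazelcastPrincipals = subject.getPrincipals(HazelcastPrincipal.class);
// ...and remove them all in one go if needed.
subject.getPrincipals().removeAll(hazelcastPrincipals);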
Also, DefaultPermissionPolicy, which used to consume ClusterPrincipal and read the endpoint address from it, now works with the new ClusterRolePrincipal and ClusterEndpointPrincipal types.
See the following table for the before/after sample IPermissionPolicy implementations.
3.12.x | 5.0 |
Changes in ClusterLoginModule
ClusterLoginModule in Hazelcast IMDG 3.12.x contained four abstract methods to alter the behavior of LoginModule:
- onLogin
- onCommit
- onAbort
- onLogout
Back then, the login module retrieved Credentials and used it to create the ClusterPrincipal. In Hazelcast IMDG 4.0, only onLogin is abstract; the others now have empty implementations. The login module creates the ClusterEndpointPrincipal automatically and adds it to the Subject.
The getName() abstract method has been added; it is used for constructing the ClusterIdentityPrincipal. The addRole(String) method can be called by child implementations to add ClusterRolePrincipals with the given name.
Also, ClusterLoginModule introduces three boolean login module options which allow skipping the addition of principals of a given type to the JAAS Subject. This allows, for instance, having just one ClusterIdentityPrincipal in the Subject even if there are more login modules in the chain. These options are:
- skipIdentity
- skipRole
- skipEndpoint
See the following table for the before/after sample implementations.
3.12.x | 5.0 |
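A rough sketch of a custom login module built on the described 4.0+ ClusterLoginModule contract (the role name, the credential check and the boolean/LoginException signature of onLogin are assumptions carried over from the 3.12.x SPI):

public class CustomLoginModule extends ClusterLoginModule {
    @Override
    protected boolean onLogin() throws LoginException {
        // Validate the incoming credentials here, then register a role for the authenticated party.
        addRole("admin");
        return true;
    }

    @Override
    protected String getName() {
        // Used by the base class to construct the ClusterIdentityPrincipal.
        return "custom-identity";
    }
}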
Changes in Credentials for Client Protocol
In Hazelcast IMDG 3.12.x, custom credentials coming through the client protocol were always automatically deserialized. To avoid this, the Credentials interface has been redesigned in Hazelcast IMDG 4.0 to contain only the getName() method (renamed from getPrincipal()). The endpoint handling has been moved out of the interface.
Now, Credentials has two new subinterfaces:
- PasswordCredentials: the existing UsernamePasswordCredentials class is the default implementation.
- TokenCredentials: the new SimpleTokenCredentials class has been introduced to implement it.
TokenCredentials is just a holder for a byte array, and the authentication implementations themselves, i.e., custom LoginModules, are responsible for deserializing the data when needed. The data from the client authentication message is not deserialized by Hazelcast members anymore. For standard authentication, UsernamePasswordCredentials is constructed; for custom authentication, SimpleTokenCredentials is constructed.
If the original Credentials object is not a PasswordCredentials or TokenCredentials instance, it can be deserialized manually. However, deserialization during authentication remains a dangerous operation and should be avoided.
See the following table for the before/after sample implementations.
3.12.x | 5.0 |
Credentials serialization and deserialization in the member protocol has not been changed.
Changes in JAAS Callbacks
In Hazelcast IMDG 3.x, the CallbackHandler implementation ClusterCallbackHandler was only able to work with Hazelcast's CredentialsCallback. In Hazelcast IMDG 4.0, it also works with the standard Java Callback implementations NameCallback and PasswordCallback.
DefaultLoginModule used the login module options to retrieve the member's Config object. Now, custom Callback types have been implemented which can be used to retrieve additional data required for the authentication.
List of the supported Callbacks in Hazelcast IMDG 4.0:
- javax.security.auth.callback.NameCallback
- javax.security.auth.callback.PasswordCallback
- com.hazelcast.security.CredentialsCallback (provides access to the incoming Credentials instance)
- com.hazelcast.security.EndpointCallback (allows retrieving the remote host address; it is a replacement for Credentials.getEndpoint() in Hazelcast IMDG 3.12.x)
- com.hazelcast.security.ConfigCallback (allows retrieving the member's Config object)
- com.hazelcast.security.SerializationServiceCallback (provides access to Hazelcast's SerializationService)
- com.hazelcast.security.ClusterNameCallback (provides access to the Hazelcast cluster name sent by the connecting party)
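For example, a rough sketch of retrieving the incoming credentials inside a custom login module (the callbackHandler field is an assumption and exception handling is omitted):

CredentialsCallback credentialsCallback = new CredentialsCallback();
callbackHandler.handle(new Callback[]{credentialsCallback});
Credentials credentials = credentialsCallback.getCredentials();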
Renaming Quorum as Split Brain Protection
Both in the API/code samples and documentation, the term "quorum" has been replaced by "split-brain protection".
With this change, the following configuration parameters have been renamed:
Declarative configuration elements:
- quorum → split-brain-protection
- quorum-size → minimum-cluster-size
- quorum-ref → split-brain-protection-ref
- quorum-type → protect-on
- probabilistic-quorum → probabilistic-split-brain-protection
- recently-active-quorum → recently-active-split-brain-protection
- quorum-function-class-name → split-brain-protection-function-class-name
- quorum-listeners → split-brain-protection-listeners
Programmatic configuration objects and methods:
- QuorumConfig → SplitBrainProtectionConfig
- QuorumConfig.setSize() → SplitBrainProtectionConfig.setMinimumClusterSize()
- QuorumConfig.setType() → SplitBrainProtectionConfig.setProtectOn()
- QuorumListenerConfig → SplitBrainProtectionListenerConfig
- QuorumEvent → SplitBrainProtectionEvent
- QuorumService → SplitBrainProtectionService
- QuorumService.getQuorum() → SplitBrainProtectionService.getSplitBrainProtection()
- isPresent() → hasMinimumSize()
- setQuorumName() → setSplitBrainProtectionName()
- addQuorumConfig() → addSplitBrainProtectionConfig()
- newProbabilisticQuorumConfigBuilder() → newProbabilisticSplitBrainProtectionConfigBuilder()
- newRecentlyActiveQuorumConfigBuilder() → newRecentlyActiveSplitBrainProtectionConfigBuilder()
See the following table for a before/after sample.
3.12.x | 5.0 |
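For illustration, a minimal 5.0-style sketch of the renamed configuration (the protection name, minimum size and map name are assumptions):

Config config = new Config();
SplitBrainProtectionConfig protectionConfig =
        new SplitBrainProtectionConfig("at-least-three-members", true, 3);   // was QuorumConfig
protectionConfig.setProtectOn(SplitBrainProtectionOn.READ_WRITE);            // was setType()
config.addSplitBrainProtectionConfig(protectionConfig);                      // was addQuorumConfig()
config.getMapConfig("default").setSplitBrainProtectionName("at-least-three-members");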
See the Split-Brain Protection section for more information on network partitioning.
Renaming getId to getClassId in IdentifiedDataSerializable
The getId() method of the IdentifiedDataSerializable interface has a very common name, so naming conflicts happened frequently; for example, database entities also have a getId() method. Therefore, it has been renamed to getClassId().
See the following table showing the interface code in 3.12.x and 5.0.
3.12.x | 5.0 |
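A minimal 5.0-style sketch of the renamed method (the factory and class IDs, the field and the use of writeString/readString are assumptions):

public class Employee implements IdentifiedDataSerializable {
    private String name;

    @Override
    public int getFactoryId() {
        return 1000;
    }

    @Override
    public int getClassId() {   // was getId() in 3.12.x
        return 100;
    }

    @Override
    public void writeData(ObjectDataOutput out) throws IOException {
        out.writeString(name);
    }

    @Override
    public void readData(ObjectDataInput in) throws IOException {
        name = in.readString();
    }
}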
See here for more information on IdentifiedDataSerializable.
Introducing Lambda Friendly Interfaces
Entry Processor
The EntryBackupProcessor interface has been removed in favor of EntryProcessor, which now defines how the entries are processed both on the primary and the backup replicas. Because of this, the AbstractEntryProcessor class has also been removed. This should make writing entry processors more lambda friendly.
3.12.x | 5.0 |
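For example, a minimal lambda-style sketch of the unified EntryProcessor (an IMap<String, Integer> named map is assumed); by default the same processor is applied to the primary and backup replicas:

Integer newValue = map.executeOnKey("counter", (EntryProcessor<String, Integer, Integer>) entry -> {
    entry.setValue(entry.getValue() + 1);
    return entry.getValue();
});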
This should cover most cases. If you need to define a custom backup entry processor, you can override the EntryProcessor#getBackupProcessor method:
map.executeOnKey(key, new EntryProcessor<Object, Object, Object>() {
    @Override
    public Object process(Entry<Object, Object> entry) {
        // process primary entry
        return null;
    }

    private Object processBackupEntry(Entry<Object, Object> backupEntry) {
        // process backup entry
        return null;
    }

    @Nullable
    @Override
    public EntryProcessor<Object, Object, Object> getBackupProcessor() {
        return this::processBackupEntry;
    }
});
Functional and Serializable Interfaces
Interfaces with a single abstract method which declares a checked exception have been introduced. These interfaces are also Serializable and can be readily used when providing a lambda which is then serialized.
The Projection class used to be an abstract class for historical reasons. It has been turned into a functional interface so it is more lambda friendly.
See the following table for the before/after sample implementations.
3.12.x | 5.0 |
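For example, a rough sketch of a lambda-based projection (an IMap<String, Person> named map and the Person class are assumptions):

Collection<String> names = map.project(
        (Projection<Map.Entry<String, Person>, String>) entry -> entry.getValue().getName());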
Expanding Nullable/Nonnull Annotations
The APIs of the distributed data structures have been made cleaner by adding Nullable and Nonnull annotations, and their API documentation has been improved:
- Now it is obvious, when looking at the API, where null is allowed and where it is not.
- Some methods were throwing NullPointerException while others were throwing IllegalArgumentException. Now the behavior is aligned: an unexpected null argument results in a NullPointerException being thrown.
- Some methods actually allowed null but there was no indication that they did.
- A method used on the member would accept null and behave accordingly, while on the client the same method would throw a NullPointerException. Now the behavior of the member and the client has been aligned.
The data structures and interfaces enhanced in this sense are listed below:
- IQueue, ISet, IList
- IMap, MultiMap, ReplicatedMap
- Cluster
- ITopic
- Ringbuffer
- ScheduledExecutor
Removal of ICompletableFuture
In the Hazelcast IMDG 3.12.x series, com.hazelcast.core.ICompletableFuture was introduced to enable a reactive programming style. ICompletableFuture was intended as a temporary, JDK 6 compatible replacement for java.util.concurrent.CompletableFuture, which was introduced in Java 8. Since Hazelcast 4.0 requires Java 8, the user-facing asynchronous Hazelcast API methods now have their return type changed from ICompletableFuture to Java 8's java.util.concurrent.CompletionStage.
Dependent computation stages registered using the default async methods which do not accept an explicit Executor argument (such as thenAcceptAsync, whenCompleteAsync, etc.) are executed by the java.util.concurrent.ForkJoinPool#commonPool() (unless it does not support a parallelism level of at least two, in which case a new Thread is created to run each task).
See the following table for the before/after samples.
3.12.x | 5.0 |
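A minimal sketch of the new return type in use (an IMap<String, String> named map is assumed):

CompletionStage<String> future = map.getAsync("key");
future.thenAcceptAsync(value -> System.out.println("Got: " + value));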
WAN Replication Configuration Changes
Previously, configuring WAN replication was problematic:
- You needed to specify the fully qualified class name of the WAN implementation that should be used. In most cases, this was the built-in Hazelcast IMDG Enterprise Edition (EE) implementation.
- There were various configuration options, some of which were present as Java class instance fields or XML child nodes and attributes, while others were present in a properties list. The issue with the property list is that there was no checking for typos, no documentation and no IDE help.
- If you wanted to use a custom WAN publisher SPI implementation, some configuration options did not make sense as they were tied to our implementation, e.g., the WAN queue size.
- It was verbose.
The tag which was supposed to cover both cases, using the built-in Hazelcast EE implementation and a custom WAN replication implementation (wan-publisher or WanPublisherConfig), has been separated into two configuration elements/classes, one for built-in and one for custom WAN publishers:
- batch-publisher (declarative configuration) or WanBatchPublisherConfig (programmatic configuration)
- custom-publisher (declarative configuration) or WanCustomPublisherConfig (programmatic configuration)
This means, if you are using the Hazelcast built-in WAN replication, the new configuration element is batch-publisher or WanBatchPublisherConfig. If you are using a custom WAN replication implementation, the new configuration element is custom-publisher or WanCustomPublisherConfig.
Additionally, the group password has been removed from the configuration and now only the cluster name is checked when connecting to the target cluster. This has been done to align the behavior with members forming a single cluster, where members with different passwords but with the same cluster name (previously group name) could form a cluster.
See the following table for the before/after built-in WAN publisher examples:
3.12.x | 5.0 |
Declarative Configuration | |
Programmatic Configuration | |
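For illustration, a rough programmatic sketch of a built-in (batch) WAN publisher in the 5.0 style (the cluster name, target endpoints, map name and the addBatchReplicationPublisherConfig method name are assumptions; WAN replication is an Enterprise feature):

Config config = new Config();
WanReplicationConfig wanReplicationConfig = new WanReplicationConfig().setName("london-replication");
WanBatchPublisherConfig batchPublisherConfig = new WanBatchPublisherConfig()
        .setClusterName("london")
        .setTargetEndpoints("10.3.5.1:5701,10.3.5.2:5701");
wanReplicationConfig.addBatchReplicationPublisherConfig(batchPublisherConfig);
config.addWanReplicationConfig(wanReplicationConfig);
config.getMapConfig("replicated-map")
        .setWanReplicationRef(new WanReplicationRef().setName("london-replication"));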
See the following table for the before/after custom WAN publisher examples:
3.12.x | 5.0 |
Declarative Configuration | |
Programmatic Configuration | |
See WAN Replication for more information.
WAN Replication SPI Changes
In the IMDG 3.12.x series, the WAN publisher SPI allowed you to plug into the lifecycle of a map/cache entry and replicate the updates to another system. For example, you might implement replication to Kafka or some JMS queue, or even write map and cache event changes to a log on disk. The SPI was not very intuitive, though:
- It was not clear which interface needed to be implemented (WanPublisher vs. WanReplicationEndpoint).
- You had to implement different interfaces depending on whether you were using Hazelcast IMDG Open Source or Enterprise edition.
- There were cases of leaking internals which do not make sense for some custom implementations.
- There were unused methods in the public SPI.
We have provided a new and cleaner WAN publisher SPI after 3.12.x. You only need to implement a single interface: com.hazelcast.wan.WanPublisher. This implementation can then be set in the WAN replication configuration and be used with both Hazelcast Open Source and Enterprise editions.
Predicate API Cleanups
The following refactors and cleanups have been performed on the public Predicate related API:
- Moved the following classes from the com.hazelcast.query package to com.hazelcast.query.impl.predicates:
  - IndexAwarePredicate
  - VisitablePredicate
  - SqlPredicate/Parser
  - TruePredicate
- Moved the FalsePredicate and SkipIndexPredicate classes to the com.hazelcast.query.impl.predicates package.
- Converted PagingPredicate and PartitionPredicate to interfaces and added PagingPredicateImpl and PartitionPredicateImpl to the com.hazelcast.query.impl.predicate package.
- Converted PredicateBuilder and EntryObject to interfaces (and made EntryObject a nested interface in PredicateBuilder) and added PredicateBuilderImpl to the com.hazelcast.query.impl.predicates package.
- The public API classes/interfaces no longer extend IndexAwarePredicate/VisitablePredicate; this dependency has been moved to the impl classes.
- Introduced the new factory methods in Predicates:
  - newPredicateBuilder()
  - sql()
  - pagingPredicate()
  - partitionPredicate()
Consequently, the public Predicate API now provides only interfaces (Predicate, PagingPredicate and PartitionPredicate) with no dependencies on any internal APIs.
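For example, a minimal sketch of the new factory methods (an IMap<Integer, Person> named map and the field name are assumptions):

Predicate<Integer, Person> adults = Predicates.sql("age >= 18");
PagingPredicate<Integer, Person> firstPage = Predicates.pagingPredicate(adults, 50);
Collection<Person> results = map.values(firstPage);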
See the Predicates API section for more information on predicates.
Changing the UUID String Type to UUID
Some public APIs that returned UUID strings have been changed to return UUID. These changes include the getUuid() method of the Endpoint interface, the getTxnId() method of the TransactionContext interface, the return values of listener registrations, and the registrationId parameters of the methods that de-register listeners.
See the following table for the before/after sample implementations.
3.12.x | 5.0 |
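A minimal sketch of the changed listener registration API (an IMap<String, String> named map is assumed):

UUID registrationId = map.addEntryListener(
        (EntryAddedListener<String, String>) event -> System.out.println("Added: " + event.getKey()), true);
// The same UUID is later used to de-register the listener.
map.removeEntryListener(registrationId);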
Removal of Deprecated Concurrency API Implementations
After the introduction of the CP Subsystem in Hazelcast IMDG 3.12, the legacy implementations of the distributed concurrency APIs, e.g., ILock and IAtomicLong, had been deprecated. In IMDG 4.0 and afterwards, these deprecated implementations, and additionally the ILock and ICondition interfaces, are completely removed.
After Hazelcast IMDG 3.12, the CP Subsystem received an unsafe operation mode which provides weaker consistency guarantees, similar to the former implementations in the Hazelcast IMDG 3.x series.
For more information, see the CP Subsystem section.
See the following table for the before/after samples.
3.12.x | 5.0 |
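For illustration, a minimal 5.0-style sketch of obtaining the replacements from the CP Subsystem (the structure names are assumptions):

HazelcastInstance hz = Hazelcast.newHazelcastInstance();
IAtomicLong counter = hz.getCPSubsystem().getAtomicLong("counter");   // replaces hz.getAtomicLong(...)
FencedLock lock = hz.getCPSubsystem().getLock("my-lock");             // replaces the removed ILock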
Removal of Legacy Merge Policies
All legacy merge policies have been removed. Their replacements are under the com.hazelcast.spi.merge package.
These are the replacements for IMap and ICache:
Removed IMap Merge Policies and Their Replacements
- com.hazelcast.map.merge.HigherHitsMapMergePolicy → com.hazelcast.spi.merge.HigherHitsMergePolicy
- com.hazelcast.map.merge.LatestUpdateMapMergePolicy → com.hazelcast.spi.merge.LatestUpdateMergePolicy
- com.hazelcast.map.merge.PassThroughMergePolicy → com.hazelcast.spi.merge.PassThroughMergePolicy
- com.hazelcast.map.merge.PutIfAbsentMapMergePolicy → com.hazelcast.spi.merge.PutIfAbsentMergePolicy
Removed ICache Merge Policies and Their Replacements
- com.hazelcast.cache.merge.HigherHitsCacheMergePolicy → com.hazelcast.spi.merge.HigherHitsMergePolicy
- com.hazelcast.cache.merge.LatestAccessCacheMergePolicy → com.hazelcast.spi.merge.LatestAccessMergePolicy
- com.hazelcast.cache.merge.PassThroughCacheMergePolicy → com.hazelcast.spi.merge.PassThroughMergePolicy
- com.hazelcast.cache.merge.PutIfAbsentCacheMergePolicy → com.hazelcast.spi.merge.PutIfAbsentMergePolicy
Moreover, the setMergePolicy/getMergePolicy methods have been removed from MapConfig, ReplicatedMapConfig and CacheConfig. They have been replaced by the setMergePolicyConfig/getMergePolicyConfig methods.
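For example, a minimal sketch of the replacement methods (the map name and batch size are assumptions):

MapConfig mapConfig = new MapConfig("default");
mapConfig.setMergePolicyConfig(
        new MergePolicyConfig(PutIfAbsentMergePolicy.class.getName(), 100));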
The merge-policy declarative configuration element that was used in older IMDG versions can still be used:
<merge-policy batch-size="100">LatestAccessMergePolicy</merge-policy>
See here for more information on configuring merge policies.
Changes in AWS Configuration
AWS programmatic configuration has been merged into a more universal configuration infrastructure that is common to all cloud providers. The declarative configuration remains unchanged. See here for more information on configuring Hazelcast on AWS.
See the following table for the before/after samples.
3.12.x | 5.0 |
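As a rough 5.0-style programmatic sketch (the region value is an assumption; the setProperty keys follow the declarative aws element):

Config config = new Config();
JoinConfig join = config.getNetworkConfig().getJoin();
join.getMulticastConfig().setEnabled(false);
join.getAwsConfig()
        .setEnabled(true)
        .setProperty("region", "us-west-1");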
Removal of Deprecated System Properties
The following deprecated cluster properties were removed:
- hazelcast.rest.enabled
- hazelcast.memcache.enabled
- hazelcast.http.healthcheck.enabled
See Using the REST Endpoint Groups to learn how to configure a Hazelcast instance to expose REST endpoints, Health Check and Monitoring to learn how to enable the health check, and Memcache Client to learn how to enable the memcache client request listener service.
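For example, a minimal sketch of exposing some of that functionality again through REST endpoint groups (the selected groups are assumptions):

Config config = new Config();
RestApiConfig restApiConfig = new RestApiConfig()
        .setEnabled(true)
        .enableGroups(RestEndpointGroup.HEALTH_CHECK, RestEndpointGroup.DATA);
config.getNetworkConfig().setRestApiConfig(restApiConfig);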
Removal of Deprecations in LoginModuleConfig
The following deprecated methods have been removed:
- getImplementation(), replaced by getClassName()
- setImplementation(Object), replaced by setClassName(String)
In the declarative configuration, the class-name property should be used instead.
Removal of Deprecations in MultiMapConfig
The following deprecated methods have been removed:
- getSyncBackupCount(), replaced by getBackupCount()
- setSyncBackupCount(int), replaced by setBackupCount(int)
In the declarative configuration, the backup-count property should be used instead.
See here for more information on configuring MultiMap.
Removal of Deprecations in PartitioningStrategyConfig
The misspelled setPartitionStrategy(PartitioningStrategy) method has been removed; setPartitioningStrategy(PartitioningStrategy) should be used instead.
See here for more information on configuring the partitioning strategy.
Removal of Deprecations in ServiceConfig
The following deprecated methods have been removed:
- getServiceImpl(), replaced by getImplementation()
- setServiceImpl(Object), replaced by setImplementation(Object)
See here for the ServiceConfig Javadoc.
Removal of Deprecations in TransactionContext
The deprecated getXaResource() method has been removed. HazelcastInstance.getXAResource() should be used instead.
See here for the HazelcastInstance Javadoc.
Removal of Deprecations in DistributedObjectEvent
The deprecated getObjectId() method has been removed; getObjectName() should be used instead.
See here for the DistributedObjectEvent Javadoc.
Removal of Deprecated EntryListener-based Listener API in IMap
The following set of deprecated EntryListener-based listener API methods has been removed:
- addLocalEntryListener(EntryListener<K, V>)
- addLocalEntryListener(EntryListener<K, V>, Predicate<K, V>, boolean)
- addLocalEntryListener(EntryListener<K, V>, Predicate<K, V>, K, boolean)
- addEntryListener(EntryListener<K, V>, boolean)
- addEntryListener(EntryListener<K, V>, K, boolean)
The following MapListener-based methods should be used as replacements:
- addLocalEntryListener(MapListener)
- addLocalEntryListener(MapListener, Predicate<K,V>, boolean)
- addLocalEntryListener(MapListener, Predicate<K,V>, K, boolean)
- addEntryListener(MapListener, boolean)
- addEntryListener(MapListener, K, boolean)
EntryListener-based listeners are still supported by the newer MapListener-based API and declarative configuration.
Changes in MapLoader
When you try to load map entries from a store with your MapLoader implementation (loadAll()), Hazelcast fails to do so if any of the keys or values are null in the IMDG 4.x and Platform 5.x releases.
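A rough sketch of a loadAll() implementation that guards against null values (the database helper is hypothetical):

@Override
public Map<String, String> loadAll(Collection<String> keys) {
    Map<String, String> result = new HashMap<>();
    for (String key : keys) {
        String value = database.load(key);   // hypothetical data-access helper
        if (value != null) {                 // null keys/values are rejected in 4.x/5.x
            result.put(key, value);
        }
    }
    return result;
}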
Changes in IMap Eviction Configuration
The way eviction is configured for a map has been simplified and improved.
See the following table for the before/after samples.
3.12.x | 5.0 |
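For illustration, a minimal 5.0-style sketch of the consolidated eviction settings (the size and policies are assumptions):

MapConfig mapConfig = new MapConfig("default");
mapConfig.getEvictionConfig()
        .setEvictionPolicy(EvictionPolicy.LRU)
        .setMaxSizePolicy(MaxSizePolicy.PER_NODE)
        .setSize(5000);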
Changes in IMap Custom Eviction Policy Configuration
The way a custom eviction policy is configured for a map has been simplified and improved.
See the following table for the before/after samples.
3.12.x | 5.0 |
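A rough 5.0-style sketch of plugging in a custom comparator (the comparator class name is hypothetical):

MapConfig mapConfig = new MapConfig("default");
mapConfig.getEvictionConfig()
        .setComparatorClassName("com.example.OddEvictor");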
Changes in EntryListenerConfig
The return type of the EntryListenerConfig.getImplementation() method has been changed from EntryListener to MapListener.
See the following table for the before/after snippets.
3.12.x | 5.0 |
Changes in REST Endpoints
The following REST endpoints have been changed:
- /hazelcast/rest/mancenter/changeurl is removed
- All /hazelcast/rest/mancenter/wan/* endpoints are renamed to /hazelcast/rest/wan/
The following REST endpoints now require cluster name and password as the first two URL-encoded parameters:
- /hazelcast/rest/wan/sync/map
- /hazelcast/rest/wan/sync/allmaps
- /hazelcast/rest/wan/clearWanQueues
- /hazelcast/rest/wan/addWanConfig
- /hazelcast/rest/wan/pausePublisher
- /hazelcast/rest/wan/stopPublisher
- /hazelcast/rest/wan/resumePublisher
- /hazelcast/rest/wan/consistencyCheck/map
The output of the following endpoints has been changed to JSON:
- /hazelcast/health/node-state
- /hazelcast/health/cluster-state
- /hazelcast/health/cluster-safe
- /hazelcast/health/migration-queue-size
- /hazelcast/health/cluster-size
- /hazelcast/health/ready
- /hazelcast/rest/cluster
Changes in the Diagnostics Configuration
With the introduction of the metrics system in Hazelcast IMDG 4.0 and later, the metrics collected by Diagnostics and by the metrics system are shared. This comes with the following changes to the system properties that configure diagnostics:
- hazelcast.diagnostics.metric.level is no longer available. Collecting debug metrics can be enabled by setting the hazelcast.metrics.debug.enabled or hazelcast.client.metrics.debug.enabled system properties to true for the members and clients, respectively.
- hazelcast.diagnostics.metric.distributed.datastructures is no longer available since the data structure metrics are required by the other metric consumers. Therefore, they are collected by default and there is no need to enable them for diagnostics.
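For example, enabling debug metrics on a member so that Diagnostics can collect them again:

Config config = new Config();
config.setProperty("hazelcast.metrics.debug.enabled", "true");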
Changes in the Management Center Configuration
As Management Center now uses the Hazelcast Java client to communicate with the cluster, all attributes and nested elements have been removed from the programmatic, XML and YAML configurations for Management Center, i.e., from the ManagementCenterConfig class and the management-center configuration element, except for the scripting-enabled attribute.
The default value of the scripting-enabled attribute is false, whereas in Hazelcast 3.x it was enabled by default for Hazelcast Open Source.
A full example of settings available in the Management Center configuration now looks like the following:
<management-center scripting-enabled="true" />
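The programmatic equivalent is a single setter (a minimal sketch):

Config config = new Config();
config.getManagementCenterConfig().setScriptingEnabled(true);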
This comes with the following change to the system properties that configure Management Center:
- hazelcast.mc.url.change.enabled is no longer available.
Changes in the Event Journal Configuration
The event journal configuration used to be a top-level configuration element. With IMDG 4.0 and later, this restriction has been removed, which means the event journal configuration can now be part of both map and cache configurations. This eliminates the need to additionally specify the map/cache names on the event journal configuration to connect it to the map/cache configurations.
See the following table for the before/after snippets.
3.12.x | 5.0 |
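For illustration, a minimal 5.0-style sketch of an event journal nested in a map configuration (the capacity and TTL values are assumptions):

MapConfig mapConfig = new MapConfig("default");
mapConfig.getEventJournalConfig()
        .setEnabled(true)
        .setCapacity(10000)
        .setTimeToLiveSeconds(60);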
Deprecation of User Code Deployment
Adding Java classes to members using User Code Deployment was deprecated in Platform 5.4.
The User Code Namespaces feature, which extends User Code Deployment to allow you to redeploy your classes, is available for Enterprise users.
Hazelcast recommends that Enterprise users migrate their user code to use User Code Namespaces. For further information on migrating from User Code Deployment to User Code Namespaces, see the Migrate from User Code Deployment topic.
Open Source users can either upgrade to Enterprise Edition, or add their resources to the Hazelcast member class paths.