If you have large or non-serializable values and still need a replicated cache to at least invalidate stale data, you can use the invalidation mode, which works on top of any replicated cache implementation. It relies on the class org.exoplatform.services.cache.invalidation.InvalidationExoCache, a decorator that replicates the hash code of each value in order to decide whether the local data needs to be invalidated: if the hash code of the new value is the same as the hash code of the old value, the old value is not invalidated. This check is required to avoid the infinite loop that the invalidation mode proposed out of the box by JBoss Cache would otherwise cause, as in the following example:
1. Cluster node #1 puts (key1, value1) into the cache.
2. On cluster node #2, key1 is invalidated by the put call on node #1.
3. Node #2 re-loads key1 and puts (key1, value1) into the cache.
4. On cluster node #1, key1 is invalidated, so we are back to step #1.
In the use case above, thanks to the InvalidationExoCache, the value loaded at step 3 has the same hash code as the value loaded at step 1, so step 4 will not invalidate the data on cluster node #1.
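The core idea can be sketched in plain Java. The class below is a hypothetical illustration of the mechanism, not the real InvalidationExoCache implementation: the decorator keeps only the hash code of each local value, and an invalidation triggered by a remote put is skipped when the replicated hash code matches the one already known.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the idea behind InvalidationExoCache: keep only
// the hash code of each value locally, and compare it against the hash
// code replicated by a remote put to decide whether to invalidate.
class HashInvalidationSketch {
    private final Map<String, Integer> localHashes = new HashMap<>();

    // Returns true when the local entry must be invalidated, i.e. when the
    // replicated hash code differs from the one we already have.
    boolean onReplicatedPut(String key, Object value) {
        int newHash = value.hashCode();
        Integer oldHash = localHashes.put(key, newHash);
        return oldHash == null || oldHash != newHash;
    }
}
```

Because the second put of an identical value produces the same hash code, no invalidation is triggered and the loop from the example above is broken.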
There are 2 ways to use the invalidation mode:
By configuration: simply set the parameter avoidValueReplication to true in your eXo Cache configuration. The CacheService will then wrap your eXo Cache instance in an InvalidationExoCache if the cache is defined as replicated or distributed.
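The fragment below is a sketch of what this could look like in an eXo Kernel cache configuration. Apart from avoidValueReplication itself, the surrounding element names, field names, and values follow the usual ExoCacheConfig object-param pattern and are assumptions to be checked against your kernel version:

```xml
<object-param>
  <name>mycache</name>
  <description>A distributed cache using invalidation</description>
  <object type="org.exoplatform.services.cache.ExoCacheConfig">
    <field name="name"><string>mycache</string></field>
    <field name="maxSize"><int>5000</int></field>
    <field name="liveTime"><long>600</long></field>
    <field name="distributed"><boolean>true</boolean></field>
    <!-- Assumption: avoidValueReplication is a boolean field of ExoCacheConfig -->
    <field name="avoidValueReplication"><boolean>true</boolean></field>
  </object>
</object-param>
```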
Programmatically: wrap your eXo Cache instance in an InvalidationExoCache yourself using the available public constructors. Note that if you use cache listeners, you must register them on the InvalidationExoCache instance rather than on the nested eXo Cache: the nested cache stores only hash codes, so listeners added to it would receive hash codes instead of the real values.
The invalidation is efficient if and only if the hashCode() method is properly implemented: two value objects representing the same data must return the same hash code, otherwise the infinite loop described above will still occur.
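A minimal sketch of such a value object is shown below. UserProfile is a hypothetical class used for illustration; the point is that hashCode() (and, by the usual Java contract, equals()) is derived from the data fields, so two instances holding the same data always produce the same hash code.

```java
import java.util.Objects;

// Hypothetical value object: two instances representing the same data
// must return the same hash code, otherwise the invalidation loop
// described above reappears.
final class UserProfile {
    private final String login;
    private final int version;

    UserProfile(String login, int version) {
        this.login = login;
        this.version = version;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof UserProfile)) return false;
        UserProfile other = (UserProfile) o;
        return version == other.version && Objects.equals(login, other.login);
    }

    @Override
    public int hashCode() {
        // Derived from the data fields only, never from object identity
        return Objects.hash(login, version);
    }
}
```

A default, identity-based hashCode() would make every re-loaded copy of the same data look different, which is exactly what triggers the endless invalidation cycle.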
If the data that you want to store in your eXo Cache instance takes a long time to load, and/or you would like to prevent multiple concurrent loads of the same data, you can use org.exoplatform.services.cache.future.FutureExoCache on top of your eXo Cache instance to delegate the loading of your data to a loader that is called only once, whatever the total number of concurrent threads looking for it. The example below shows how the FutureExoCache can be used:
// Define your loader first, and choose your context object carefully in order
// to be able to reuse the same loader for different FutureExoCache instances
Loader<String, String, String> loader = new Loader<String, String, String>()
{
   public String retrieve(String context, String key) throws Exception
   {
      return "Value loaded thanks to the key = '" + key + "' and the context = '" + context + "'";
   }
};
// Create your FutureExoCache from your eXo cache instance and your loader
FutureExoCache<String, String, String> myFutureExoCache =
   new FutureExoCache<String, String, String>(loader, myExoCache);
// Get your data from your future cache instance
System.out.println(myFutureExoCache.get("my context", "foo"));
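The single-load guarantee can be sketched with standard Java concurrency utilities. The class below is a hypothetical stand-in for FutureExoCache, not its real implementation: it uses the per-key atomicity of ConcurrentHashMap.computeIfAbsent so that the loader runs once no matter how many callers ask for the same missing key, and counts loader invocations to make that visible.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the guarantee FutureExoCache provides: the loader
// is invoked once per missing key, even under concurrent access.
class SingleLoadCacheSketch {
    private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    final AtomicInteger loaderCalls = new AtomicInteger();

    String get(String context, String key) {
        // computeIfAbsent is atomic per key: concurrent callers wait for the
        // single loader invocation to finish, then all see its result.
        return cache.computeIfAbsent(key, k -> {
            loaderCalls.incrementAndGet();
            return "Value loaded thanks to the key = '" + k
                + "' and the context = '" + context + "'";
        });
    }
}
```

Note that, as with FutureExoCache, once a value has been loaded for a key, subsequent callers get the cached value and the loader is not invoked again.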