
Work In Progress

  • This is a work in progress, please feel free to volunteer to work on this or help with it


This is an effort to provide an efficient cluster-wide cache for Sakai. The idea is to make it easy to put data into a central cache so that it can easily be tuned for the entire server and the entire cluster.

Cluster-wide caching

  • Goals
    • Single point for all long lived caches in Sakai
    • Cache which works across multiple servers in the cluster
    • Handles expiration across the cluster automatically (but can be manually controlled)
    • Handles replication across the cluster automatically (but can be manually controlled)
    • Support Manual and Automatic discovery of other cluster nodes (configurable)
    • Don't touch the database with caching data (efficiency and scalability)
    • Add tests to the memory service and profile the performance
    • Improve the cache access interfaces so there is a single point of access which is easy for developers to use and understand
  • Why do we need a cluster wide cache?
    • The advantages of a cluster wide cache are primarily a reduction in database access and the ability to scale up the app servers. For common actions, you can see the database activity on a table increase in direct correlation with the number of app servers. This is because each app server has to load the data into the local cache (if one is even being used) or simply load it each time it is requested. With a cluster wide cache, the data is loaded from the database only once and then sent out to the other nodes. Likewise, it is removed from all caches at once when it changes.
    • It can also mean greater developer efficiency since they can depend on the cache to handle object retrieval instead of worrying about trying to optimize code and making it overly complex and harder to maintain. In general, it is preferable to take advantage of caching rather than attempting to optimize code in other ways since any optimization is adding to complexity and reducing flexibility (since optimal code paths are hard to change).
  • Nice to have
    • Ability to easily wrap a caching interceptor around a service class
    • Ability to easily control whether a cache is replicated, distributed, or just local
    • Tree cache/multiple reference expiration (see the section about this)

Notes about EhCache

  • Will have to move the ehcache jar into common/lib
    • The Tomcat and RMI classloaders do not get along that well. Move ehcache.jar to $TOMCAT_HOME/common/lib. This fixes the problem. This issue happens with anything that uses RMI, not just ehcache.
    • There are lots of causes of memory leaks on redeploy. Moving ehcache and backport-util-concurrent out of the WAR and into $TOMCAT/common/lib fixes this leak.
    • This also means commons-logging and commons-collections (if used with Terracotta) have to go into common/lib
  • Multicast and automatic discovery
    • Multicast Blocking
      The automatic peer discovery process relies on multicast, which can be blocked by routers. Virtualisation technologies like Xen and VMware may also block multicast; if so, enable it. You may also need to turn it on in the configuration for your network interface card.
      An easy way to tell whether your multicast is getting through is to use the ehcache remote debugger and watch for the heartbeat packets to arrive.
    • Multicast Not Propagating Far Enough or Propagating Too Far
      You can control how far the multicast packets propagate by setting the badly misnamed time to live. Using the multicast IP protocol, the timeToLive value indicates the scope or range in which a packet may be forwarded. By convention:
      0 is restricted to the same host
      1 is restricted to the same subnet
      32 is restricted to the same site
      64 is restricted to the same region
      128 is restricted to the same continent
      255 is unrestricted
      The default value in Java is 1, which propagates to the same subnet. Change the timeToLive property to restrict or expand propagation.
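Both peer discovery and the timeToLive scope are set on the peer provider factory in ehcache.xml. A hedged fragment, where the multicast group address and port are the common ehcache documentation defaults rather than Sakai-specific values:

```xml
<!-- Automatic peer discovery; multicastGroupAddress/Port are the usual
     ehcache documentation defaults, not Sakai-specific values.
     timeToLive=1 restricts heartbeat packets to the same subnet. -->
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=automatic,
                multicastGroupAddress=230.0.0.1,
                multicastGroupPort=4446,
                timeToLive=1"/>
```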
    • RMICachePeer may fail to start if there are spaces in the tomcat path (remove all spaces from the tomcat path)
  • The default delivery mechanism for ehcache is RMI. This would seem to perform worse than some lighter-weight options like JXTA or JGroups, but that needs to be tested
    • Some users have reported that enabling distributed caching causes a full GC each minute. This is an issue with RMI generally, which can be worked around by increasing the interval for garbage collection. The effect RMI has is similar to a user application calling System.gc() each minute. The -XX:+DisableExplicitGC setting below disables application calls to System.gc(), but it does not disable the full GC initiated by RMI.
      The default interval was increased to 1 hour in JDK 6. The sun.rmi.dgc.client.gcInterval and sun.rmi.dgc.server.gcInterval system properties control the interval.
      See the JDK bug report for detailed instructions on workarounds.
  • Some recommended GC settings
    • -XX:+DisableExplicitGC - some libs call System.gc(). This is usually a bad idea and could explain some of what we saw.
    • -XX:+UseConcMarkSweepGC - use the low pause collector
    • -XX:NewSize set to roughly 1/4 of the total heap size, together with -XX:SurvivorRatio=16
  • Replication requires objects to be Serializable
    • Non-serializable objects can use all parts of ehcache except the DiskStore and replication. If an attempt is made to persist or replicate a non-serializable element, it is removed from the cache and a WARNING-level log message is emitted.
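A quick way to see whether a value will survive replication is to try serializing it. This stdlib-only sketch (the class names are made up for illustration) mirrors the check that replication effectively performs:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializableCheck {
    // A value that implements Serializable can be replicated.
    static class CacheableValue implements Serializable {
        private static final long serialVersionUID = 1L;
        final String data;
        CacheableValue(String data) { this.data = data; }
    }

    // No Serializable, and holds a Thread: cannot be replicated or persisted.
    static class NonCacheableValue {
        final Thread worker = new Thread();
    }

    /** Returns true if the value survives a serialization round-trip. */
    static boolean canReplicate(Object value) {
        try {
            new ObjectOutputStream(new ByteArrayOutputStream()).writeObject(value);
            return true;
        } catch (IOException e) {
            // ehcache would log a WARNING and discard the element here
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(canReplicate(new CacheableValue("ok")));   // true
        System.out.println(canReplicate(new NonCacheableValue()));    // false
    }
}
```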
  • Use TCP for reliability (instead of UDP)
  • Shutting down the cache?
    • If the JVM keeps running after you stop using ehcache, you should call CacheManager.getInstance().shutdown() so that the threads are stopped and cache memory is released back to the JVM. Calling shutdown also ensures that your persistent disk stores are written to disk in a consistent state and will be usable the next time they are needed.
    • If the CacheManager does not get shut down it should not be a problem: there is a shutdown hook which calls shutdown on JVM exit. Even so, it is best practice to shut it down explicitly in your code.
      Shut down the singleton CacheManager with: CacheManager.getInstance().shutdown();
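The interplay between an explicit shutdown call and the JVM hook can be illustrated with plain JDK code. This is not the ehcache implementation, just a sketch of the same idempotent pattern:

```java
public class ShutdownDemo {
    private static boolean shutdown = false;

    // Idempotent, like CacheManager.shutdown(): safe to call more than once.
    static synchronized void shutdownCaches() {
        if (!shutdown) {
            shutdown = true;
            System.out.println("caches shut down");
        }
    }

    public static void main(String[] args) {
        // The library-registered hook fires on JVM exit as a safety net...
        Runtime.getRuntime().addShutdownHook(new Thread(ShutdownDemo::shutdownCaches));
        // ...but best practice is to shut down explicitly in application code;
        // the hook then finds nothing left to do.
        shutdownCaches();
    }
}
```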
  • Testing and checking the cache
    • Section 9 of the manual explains methods for measuring cache efficiency
  • Upgrade to version 1.4.0 of ehcache (many advantages to using the newest version)
  • Testing cache replication
    ehcache-1.x-remote-debugger.jar can be used to debug replicated cache operations. It is included in the distribution tarball for ehcache-1.2.3 and higher. It is invoked using:
    java -jar ehcache-1.x-remote-debugger.jar path_to_ehcache.xml cacheToMonitor
    It will print the configuration of the cache, including replication settings, and monitor the number of elements in the cache. If you are not seeing replication in your application, run this tool to see what is going on.
    It is a command line application, so it can easily be run from a terminal session.

Sample ehcache settings

  • Sample settings
  • Sample cache settings
    • Definition of configuration
      replicatePuts=true | false - whether new elements placed in a cache are replicated to others. Defaults to true.
      replicateUpdates=true | false - whether new elements which override an element already existing with the same key are replicated. Defaults to true.
      replicateRemovals=true | false - whether element removals are replicated. Defaults to true.
      replicateAsynchronously=true | false - whether replications are asynchronous (true) or synchronous (false). Defaults to true.
      replicateUpdatesViaCopy=true | false - whether the new elements are copied to other caches (true), or whether a remove message is sent. Defaults to true.
    • Leaving properties out defaults everything to true
  • Hibernate cache settings (recommended)
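The replication properties defined above are passed to the RMI replicator through a cacheEventListenerFactory element in ehcache.xml. A hedged sample, where the cache name, sizes, and timeouts are placeholders rather than recommended Sakai values:

```xml
<!-- Cache name, sizes, and timeouts are illustrative placeholders -->
<cache name="org.sakaiproject.example.ExampleCache"
       maxElementsInMemory="10000"
       eternal="false"
       timeToIdleSeconds="600"
       timeToLiveSeconds="1200"
       overflowToDisk="false">
  <cacheEventListenerFactory
      class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
      properties="replicateAsynchronously=true,
                  replicatePuts=true,
                  replicateUpdates=true,
                  replicateUpdatesViaCopy=false,
                  replicateRemovals=true"/>
</cache>
```

Omitting the properties attribute entirely leaves all of the replication settings at their default of true.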

Sakai Memory Service (memory) changes and notes

  • Move ehcache jar into common/lib
    • Requires creating a new deployer in memory and adding in the module to the project base POM
    • Also requires removing the old deployer from db/shared-deployer
    • Also requires putting backport-util-concurrent into common/lib
    • NOTE: Merging will require adjustments to some other projects because of this change
  • Changed memory service from abstract to normal class
  • Move the ehcache.xml file into api/src/java/ehcache.xml
    • Would be good to have one in sakai_home override this default one (might be a pipe dream though)
  • Remove dependency on EventTrackingService
    • This is because we want to get rid of event based cache invalidation
  • Remove use of ComponentManager static (makes testing impossible since we cannot simulate the entire CM)
    • Replaced this with use of application context to attempt to load cache beans by name
  • Switch from using lookup-method to setter injection
    • This revealed an issue with a circular dependency which was handled using spring lazy init
    • This also allows better ability to run tests
  • Remove the explicit garbage collection
    • This is recommended for ehcache; here is the current code
  • Remove the use of multirefcache (only used in security service currently)
    • Deprecate the method for making a multirefcache (newMultiRefCache(String cacheName)) (wink)
    • Cause methods that build the MRC to notify that it is deprecated
    • Fix the security service to simply invalidate its own entries (invalidation will propagate)
    • Then destroy the MRC and all related methods from memory service
    • NOTE: Merging will require adjustments to some other projects because of this change
  • Fix the CacheRefresher so it works outside of MRC
    • The refresh method of the refresher is no longer called unless the multi-ref cache is being used (through getPayload() in the MemCache inner class). If we want to continue supporting the refresher outside MRC, it needs to be called in the various methods of MemCache as well
    • The code has now been updated to support it
  • Switch all keys over from Object to String (Ian Boston suggestion)
    • This requires changes (replace Object with String) to the memory interfaces and also changes to the NotificationCache (event-impl) and SiteCacheImpl (site-impl)
    • NOTE: Merging will require adjustments to some other projects because of this change
  • Deprecate the use of the "pattern" argument
    • This was only used to filter out event messages but we are getting rid of event based cache cleanup so this is not needed
  • Fix up API documentation so it all makes sense and is more accurate and understandable
    • Will be running this by a junior developer to ensure it is clear
  • Questions
    • What is org.sakaiproject.memory.MemoryService.mref_map for?
      • This is the secondary cache for keeping track of all the references related to the multiref cache
    • What is org.hibernate.cache.UpdateTimestampsCache doing?
      • This is one of the hibernate caches, recommend smaller default settings (from the ehcache and hibernate docs)
    • What is org.hibernate.cache.StandardQueryCache
      • This is the primary hibernate cache, recommend larger default settings (from the ehcache and hibernate docs)
    • Why is there a separate cache for the UDS? (org.sakaiproject.user.api.UserDirectoryService)
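The two Hibernate cache regions mentioned above usually get explicit entries in ehcache.xml. A hedged example, with sizes drawn from typical ehcache/Hibernate guidance rather than tuned Sakai values:

```xml
<!-- UpdateTimestampsCache tracks table modification times and should not
     expire; StandardQueryCache holds query result sets. Sizes are
     illustrative, not tuned Sakai values. -->
<cache name="org.hibernate.cache.UpdateTimestampsCache"
       maxElementsInMemory="5000"
       eternal="true"
       overflowToDisk="false"/>

<cache name="org.hibernate.cache.StandardQueryCache"
       maxElementsInMemory="10000"
       eternal="false"
       timeToLiveSeconds="300"
       overflowToDisk="false"/>
```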


  • Test out the cache using cluster wide invalidation and also cluster wide replication using RMI
  • Setup tests to profile and unit test where appropriate
    • (tick) Write unit tests to check the operation of the current APIs and validate the contract
    • Write runtime tests to get a baseline testing of the memory service working
    • Switch on distributed caching
    • Execute tests using a single node
    • Execute tests using two nodes
    • Get someone else to run the tests
  • Look at the possibility of replacing the RMI version with JGroups or some other centralized method which lets us control the server discovery or server definitions
    • This might wait until later because it may be really really hard
  • Run this by a junior developer to ensure it is clearly documented
  • Merge the branch into the trunk
  • NOTE: Record of test runs is available

Multiple reference and tree caching addition

  • Ian B. has added some APIs and implementations to the branch to support multiple reference caching
    • Here are his notes about it which explain pretty clearly what it does
      It is a simplified multiple reference cache.
      Objects in the cache can have either forward or reverse dependencies.
      When an item is added its dependencies are recorded; when an item is removed its dependencies are removed.
      When an item is re-put, the dependencies are removed, then the object and its dependencies are updated.
      Any cluster implementation of this interface will automatically perform cache operations across the whole cluster; the consuming service should not have to (and should not) perform any cluster-wide invalidation, concerning itself only with its own invalidations.
  • The files that were added are as follows
    • APIS
      • Some exception classes
    • IMPLS
      • net.sf.ehcache.distribution.* (fork of some parts of ehcache)
      • (util class)
  • This has not added any dependencies or spring beans
  • The forked ehcache code is there to allow the replication to be turned off temporarily and then turned back on
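The dependency behaviour described in Ian's notes can be sketched with plain JDK collections. This is not the actual Sakai or ehcache API, just an illustration of recording dependencies on put and cascading them on removal (the key names are invented):

```java
import java.util.*;

public class MultiRefCacheSketch {
    private final Map<String, Object> cache = new HashMap<>();
    // reverse index: key -> the keys that depend on it
    private final Map<String, Set<String>> dependents = new HashMap<>();

    /** Put a value, recording the keys it depends on. */
    public void put(String key, Object value, Collection<String> dependsOn) {
        remove(key);  // a re-put clears the old entry and its dependencies
        cache.put(key, value);
        for (String dep : dependsOn) {
            dependents.computeIfAbsent(dep, k -> new HashSet<>()).add(key);
        }
    }

    /** Remove a key and, recursively, everything that depends on it. */
    public void remove(String key) {
        cache.remove(key);
        Set<String> deps = dependents.remove(key);
        if (deps != null) {
            for (String dependent : deps) {
                remove(dependent);
            }
        }
    }

    public boolean contains(String key) { return cache.containsKey(key); }

    public static void main(String[] args) {
        MultiRefCacheSketch c = new MultiRefCacheSketch();
        c.put("/site/1", "site", Collections.emptyList());
        c.put("/site/1/page/2", "page", Arrays.asList("/site/1"));
        c.remove("/site/1");  // cascades to the dependent page entry
        System.out.println(c.contains("/site/1/page/2"));  // false
    }
}
```

The consuming service only ever removes its own key; the cascade (and, in the real implementation, the cluster-wide propagation) happens inside the cache.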


  1. Can you provide some background for the choice of distributed EhCache over Terracotta? Is it a matter of licensing?

    1. EhCache seemed to meet the requirements we were going for and is a mature product. It also works well with Hibernate. I don't see any reason the APIs, as they exist in the branch currently, could not support the use of EhCache with Terracotta (or virtually any cache). It looked like Terracotta required running another server, though, and that seems to break the "easy to install" goal (I don't know that much about Terracotta though).

      1. Thanks, Aaron, that makes sense.

        Although EhCache as a whole is fairly mature, its "distributed" functionality is younger than Terracotta. But I see the 1.3 announcement mentions that they've shaken out a lot of issues.

        Anyway, Terracotta is fascinating (and scary) in ways that go beyond improving DB query performance, so someone(s) should probably look into it separately....

        1. Terracotta has drop-in EhCache support, so you can code against EhCache and, if you want to use Terracotta to make the cache persistent and/or available across multiple servers, you can do it with a configuration file.

          As a Terracotta developer, I'm biased, but I would definitely encourage you to check it out. It gives you an arbitrarily large persistent shared heap that allows multiple JVMs to interact with each other transparently as if the threads and heap in all JVMs were in a single very large JVM. Our EhCache and Hibernate support are integration modules that we've built on top of the core technology, but you can do a lot more with it, if you want.

          It does have a stand-alone server, but you get a lot of power and control from it. Plus, since it's declarative and transparent, you can run without it if you don't want to run clustered.

          Feel free to come by our forums or IRC channel (#terracotta), or use our mailing lists if you want to find out more.

  2. Ian, regarding the MultipleReferenceCache work, I'm worried about committing Sakai to an EhCache fork. Ideally that aspect could be decoupled from the rest of the branch (whose goals seem pretty straightforward) and treated as a submission to the EhCache project. Does Sakai have an immediate need to turn replication on and off on a per-thread basis? If so, how do other EhCache clients work around it?

    1. I am not entirely certain what the 'EhCache fork' is. But here is some clarification that might answer the question.

      Under the Memory Service API we have used ehcache to perform the caching operations rather than ConcurrentHashMaps.

      The Memory Service has a multi-ref cache implementation of the 'cache entry is dependent on a list' variety.

      I implemented another 2 classes that provided 'cache entry is dependent on' and 'other cache entries depend on this cache entry' multi-ref caches. The APIs for these are not final or necessarily clean, and do not bind the API to ehcache at all; the binding is all in the implementation under the API.

      Perhaps they don't need to be there at all; I am not too bothered.

      Just so I have things clear, can you explain in more detail your concerns about 'committing Sakai to an EhCache fork'? Are you concerned about the use of ehcache and the level of commitment that Sakai is making to it?

      1. My concerns were raised by the "Multiple reference and tree caching addition" section on this very Confluence page. (smile)

        The section's use of the word "fork" may be misleading, given what I actually find in your directory: basically two classes copied and renamed from ehcache so that you can wrap thread-specific disabling around "final" public methods. It would be nice to get them out of the ehcache package, and it would be nice to reduce the amount of copied code somehow (maybe by using delegation rather than subclassing?), but maintaining two modified classes isn't as big a maintenance headache as maintaining a fork of the entire ehcache tree.