Java and .NET Caching with Spring

If you want a major performance boost out of your application, then caching should be a part of your strategy. No doubt you have experienced moments in coding where you needed to store sets of data so that you don’t have to go back to the source to get them every time. The simplest form of caching is lazy loading where you actually create the objects the first time in memory and from there on out, you access them from memory. In reality, caching gets a lot more difficult and has many considerations.

  • How do I cache in a distributed environment?
  • How do I expire items in the cache?
  • How do I prevent my cache from overrunning memory?
  • How do I make my cache thread-safe and segment it?

All of these are concerns that you will have if you “roll your own” solution to caching. Let’s just leave the heavy lifting to the Spring Framework and we can go back to concerning ourselves with solving the complex problems of our domain.
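To see why "rolling your own" falls short, here is the lazy-loading approach from above sketched in a few lines of plain Java (class and method names are illustrative, not from any framework). Notice that none of the bulleted concerns, eviction, memory bounds, distribution, are addressed:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// A hand-rolled lazy-loading cache: the value is created on first access
// and served from memory afterwards. Thread-safe thanks to
// ConcurrentHashMap, but it never evicts, never bounds memory, and knows
// nothing about other machines.
class LazyCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();
    private final Function<K, V> loader;

    LazyCache(Function<K, V> loader) {
        this.loader = loader;
    }

    V get(K key) {
        // The loader runs only on a cache miss
        return store.computeIfAbsent(key, loader);
    }

    int size() {
        return store.size();
    }
}
```

Everything the bullet list asks for would have to be bolted onto this by hand, which is exactly the heavy lifting we want Spring to do for us.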

Spring has a caching mechanism/abstraction for both Java and .NET, although the Java version is far more robust. Caching in Spring is accomplished through AOP (Aspect-Oriented Programming). A caching annotation (Java) or attribute (.NET) can be placed on a method or a class to indicate that it should be cached, which cache should be used, and how long to keep entries before eviction.

Java Spring Cache with EHCache

In Java, caching with Spring couldn’t be easier. Spring supports several different caching implementations, but EHCache is the default and by far my favorite. EHCache is robust, configurable and handles a distributed environment with ease. Let’s look at how we can add caching to a Spring project.

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:cache="http://www.springframework.org/schema/cache"
       xmlns:p="http://www.springframework.org/schema/p"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/cache
           http://www.springframework.org/schema/cache/spring-cache.xsd">

  <cache:annotation-driven />

  <!-- Ehcache library setup: annotation-driven caching looks for a bean named "cacheManager" -->
  <bean id="cacheManager" class="org.springframework.cache.ehcache.EhCacheCacheManager"
        p:cache-manager-ref="ehcache"/>
  <bean id="ehcache" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean"
        p:config-location="classpath:ehcache.xml"/>
</beans>

Now that we have our cache set up, we can start to utilize it.

@Cacheable(value="records", key="'recordList'") // Cache the result; return it from the cache if present
public Collection findRecords(RecordId recordId) {...}

@Cacheable(value="records", key="'recordList'", condition="#recordType == 2") // Only cache if record type is 2
public Collection findRecords(int recordType) {...}

@CacheEvict(value="records", allEntries=true) // Evict all entries so the cache reloads
public void loadAllRecords() {...}

In the above example, the annotation tells Spring to store the returned collection in a cache named “records” under the key “recordList”. The key parameter is optional and is evaluated as a Spring Expression Language expression, which is why literal keys are quoted and method arguments are prefixed with #; the second method shows using an expression to cache conditionally. Remember that caches are defined either dynamically, as above, or in ehcache.xml. For most complex caching scenarios, you will want to define the cache in ehcache.xml with eviction and distribution rules, and Spring will find it by the name given in the annotation.
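For reference, a minimal ehcache.xml entry for the “records” cache might look like the following sketch (the sizing and timeout values are illustrative assumptions, not settings from this article):

```xml
<ehcache>
  <!-- Cache referenced by name from the @Cacheable annotations above -->
  <cache name="records"
         maxElementsInMemory="10000"
         eternal="false"
         timeToIdleSeconds="300"
         timeToLiveSeconds="600"
         memoryStoreEvictionPolicy="LRU"
         overflowToDisk="false"/>
</ehcache>
```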

What about .NET?

In .NET, you have a very similar mechanism for managing a cache.

<!-- The caching aspects are applied to the DAOs in the XML configuration -->

[CacheResult("MyCache", "'Record.RecordId=' + #id", TimeToLive = "0:1:0")]
public Collection GetRecord(long id)
{
   // implementation not shown...
}

Remember that with the .NET cache, you provide the caching implementation in the XML just as you do in the Java version. In this example, we have used the provided AspNetCache.

What about controlling the cache yourself, querying the cache, and more complex operations? Even that is simple: just autowire the cache manager or retrieve it from the application context.

@Autowired
private CacheManager cacheManager;

Cache cache = cacheManager.getCache("records");
// Spring's Cache.get(...) returns a ValueWrapper; unwrap it to get the stored value
Collection records = (Collection) cache.get("recordList").get();
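What the annotations automate is essentially the cache-aside pattern; here is a plain-Java sketch of that logic with no Spring involved (all names are hypothetical):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Cache-aside in miniature: check the cache, fall back to the source on a
// miss, then populate the cache. This is what @Cacheable automates.
class RecordService {
    private final Map<String, List<String>> cache = new HashMap<>();
    int sourceHits = 0; // counts trips to the "database"

    List<String> findRecords(String key) {
        List<String> cached = cache.get(key);
        if (cached != null) {
            return cached; // hit: never touch the source
        }
        List<String> loaded = loadFromSource(key); // miss: go to the source
        cache.put(key, loaded);
        return loaded;
    }

    void evictAll() { // roughly what @CacheEvict(allEntries=true) does
        cache.clear();
    }

    private List<String> loadFromSource(String key) {
        sourceHits++;
        return List.of(key + "-1", key + "-2"); // stand-in for a real query
    }
}
```

Seeing the pattern spelled out makes it clear why the annotation approach is attractive: none of this plumbing has to live in your business methods.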

Built-in .NET Caching

Fortunately for .NET users, there is also a caching framework built into .NET itself. This technique is used mostly in the MVC and ASP world, and I am not particularly fond of it since it is geared specifically to the web side of an application. I would prefer it to be more generic, like the Spring solution, but it does have some advantages: you can configure the caches in web.config, and you can create custom caching implementations that use different mechanisms for cache management (there are good examples of custom providers backed by MongoDB, for instance). The .NET version of the cache works much the same way as the Spring one does. Here are some examples with configuration.

[OutputCache(CacheProfile="RecordList", VaryByParam="recordType")]
public ActionResult GetRecordList(string recordType)
{
    // implementation not shown...
}

Now the configuration in the web.config…

<caching>
  <outputCacheSettings>
    <outputCacheProfiles>
      <add name="RecordList" duration="3600" />
    </outputCacheProfiles>
  </outputCacheSettings>
</caching>

If we are deploying our application to the cloud in Azure, we can also use the AppFabric cache.

Hibernate, DataNucleus, JPA Result Caching

Another thing to keep in mind is that tools such as Hibernate have caching built in. Hibernate provides a second-level cache implementation when you configure one, and I tend to use EHCache for this as well. Remember to add Hibernate’s cache annotation, at the class level, to the objects you want cached. A properly set up ORM solution with Spring and Hibernate, with a well-configured second-level cache, is very hard to beat in performance and maintainability.
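Concretely, the pieces involved look something like this sketch (the entity, region name, and property values are illustrative assumptions, not settings from this article):

```java
// Entity opted in to Hibernate's second-level cache at the class level (sketch)
@Entity
@org.hibernate.annotations.Cache(usage = CacheConcurrencyStrategy.READ_WRITE, region = "records")
public class Record { ... }
```

```properties
# hibernate.properties: enable the second-level cache, backed by EHCache
hibernate.cache.use_second_level_cache=true
hibernate.cache.region.factory_class=org.hibernate.cache.ehcache.EhCacheRegionFactory
```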


We have really done a lot with very little, thanks to Spring and caching. While caching is powerful and will help improve the performance of your application, overdoing it can cause problems that are difficult to diagnose. If you are caching, make sure you always understand the implications for the data and what will happen throughout the cache’s lifecycle.

Create a winning web services strategy with a hub

SOA architecture and the Enterprise Service Bus aren’t new concepts to IT, but the drive to migrate to RESTful services over the past few years has made them more relevant than they once were. A good rule of thumb for determining whether you need an ESB is to analyze and visualize what your infrastructure looks like. Service bus architecture works best when your architecture looks like a wagon wheel or “hub” with spokes: multiple systems, whether external or internal, communicating with a central “brain” or repository of information.

Let’s look at some of the reasons an ESB implementation is a good idea.

  • An ESB provides a facade or interface on top of the external systems your applications need to interact with, giving you the ability to replace those external systems at will without changing the applications in front of the ESB. Simple interface-driven design at its best.
  • An ESB provides a common home for your enterprise business logic, data transformations and hard core systems interactions.
  • User interfaces into the system can also be more readily shifted from one technology to another making you more agile. Also, multiple interfaces can run off the same ESB services, e.g. iPhone, iPad, Ajax framework UIs, etc.
  • An ESB gives you an external API for other business partners to integrate with. Most often, third-party integrations are an afterthought, resulting in a reactionary measure to accommodate another system.
  • An ESB encourages code reuse in your enterprise.
  • An ESB gives you a set of standards: SOAP, MTOM, REST, JSON, etc. Standards make integrations much simpler.
  • Developers are no longer accessing your databases directly with code. Control over performance is pushed back to the ESB tier.
  • An ESB is ideal in an environment where you need “translation”: multiple systems in .NET, Java, PHP, C++ and legacy technologies.
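The facade point in the first bullet can be sketched in plain Java: callers depend only on an interface, so the external system behind it can be swapped without touching them (all names here are hypothetical):

```java
// The contract the ESB exposes; callers only ever see this interface.
interface RecordLookup {
    String findOwner(String recordId);
}

// Today's backing system...
class LegacyMainframeLookup implements RecordLookup {
    public String findOwner(String recordId) {
        return "mainframe-owner-of-" + recordId;
    }
}

// ...can be replaced at will, with no change to the code in front of it.
class RestServiceLookup implements RecordLookup {
    public String findOwner(String recordId) {
        return "rest-owner-of-" + recordId;
    }
}

class BillingClient {
    private final RecordLookup lookup; // depends on the facade only

    BillingClient(RecordLookup lookup) {
        this.lookup = lookup;
    }

    String ownerFor(String id) {
        return lookup.findOwner(id);
    }
}
```

The same interface-driven idea scales up from a single class to an entire service bus fronting external systems.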

Those are a lot of great reasons to use an ESB in your system architecture, but an ESB isn’t a catch-all solution. It has a few drawbacks that need to be considered.
  • ESB implementations aren’t for the faint of heart. It takes some expertise to plan and execute a successful hub.
  • ESB-based systems have an increase in network chatter since all interactions are back to the ESB via the network.
  • There are not many open source solutions for ESB implementation. MuleESB is the leading one for Java. Commercially, BizTalk, webMethods and TIBCO are available; they are feature rich but extremely expensive.
  • In larger IT organizations, an ESB implementation is usually confined to a small group, and other groups tend to ignore its existence and continue to write logic straight into their projects, creating a silo approach of small, isolated projects with duplicated logic and code.
  • An ESB can create a bottleneck in larger organizations that have many projects running in parallel. The ESB team has to grow to meet the demand from other groups and service their requests.
  • An ESB doesn’t make sense if you have a small number of system interactions. For example, a product company with a single product and database doesn’t need an ESB implementation. Again, it goes back to what your visualization looks like. In this example, the diagram would be a single spoke, and we are looking for the wagon wheel.

ESB, hub, SOA… whatever you want to call it, it is all about the same thing: reducing complexity, reducing cost of ownership, increasing agility and providing easy integrations to make your IT organization successful.

Avoid “Roll your own” and reduce software cost of ownership

A wise man once said that just because you can do something, it doesn’t follow that you must do it. I can’t think of a better industry to apply this to than software. Think about the business domain you are in for a minute… it could be healthcare, finance, or content management. Now turn your attention to the software you are writing and analyze how much time and money you spend on custom software to facilitate the business. Now think about all of the components of said software that do not apply strictly to your business domain, like logging, auditing, utilities, remote communications, etc. Were these concerns, which are not related directly to your business, written by developers in house, or were they simply acquired from a third party such as open source? Chances are that if you answered the former, your cost of ownership is substantially higher than it needs to be.

When languages are in their infancy, “rolling your own” solutions to cross-cutting concerns is a necessity, but the commercial software and open source communities quickly catch up and provide solutions that can fit your needs. In the case of legacy software where, at the time, there was no choice, a custom approach is warranted. My philosophy has always been this: “Let a software company do what it’s good at… writing software, and turn your attention to your own business.” After all, if you’re a healthcare company, you’re not in the business of custom software and, more importantly, you’re not in the business of writing logging software, as an example.

Unfortunately, there are still IT departments that commit the cardinal sin of “reinventing the wheel” because they want total code ownership. For example, I have heard the statements, “Why would I use Jetty as an embedded HTTP server when I could roll my own in a few hours?” and “Why would I use Log4J when I have a simple class that does the same thing?”. On the surface, these statements can seem innocent enough, but let’s look at the consequences for the organization of going this route, as a timeline.

  1. Developer writes a custom logging component (1 day)
  2. New requirements for logging come up that the developer must implement (2 days)
  3. Bugs and maintenance of said logging component over 2 years (10 days)
  4. Developer leaves to get another job and another developer has to take over and learn his code (1 day)
  5. QA and testing of this component over two years (4 days)
  6. Developer leaves again and a new developer takes over (1 day)
  7. Someone comes to their senses and replaces the component with an open source one (2 days)
Total cost of owning your own logging component: 21 days over a 2-year period.

Software engineers have a habit of considering only their own ecosystem when writing features into software. The downstream efforts that increase total cost of ownership should always be considered. Let’s look at how this could have been done differently.
  1. Developer needs a logging component, so he chooses an open source solution like Log4J and drops it into the project (2 hours)
  2. New requirements for logging come up; these requirements are already supported through configuration (0 hours)
  3. Bugs and maintenance of the logging software are handled by the open source community; the developer just updates the library (.5 hours)
  4. Developer gets another job and a new developer comes in already understanding the component (0 hours)
  5. QA and testing of the component were done by the open source community (0 hours)
  6. Developer leaves again, but another developer comes in already knowing the component (0 hours)
  7. A new, widely used open source framework becomes available that replaces said component, so the component is switched out (2 hours)
Total cost of using an open source logging component: 4.5 hours over a 2-year period.

By using a library maintained and developed by someone else, we saved over 20 days that could have been spent working on our core business.

I am very fond of saying that I will spend 2 days looking for an open source component that would take me only 2 hours to write myself. If you consider your entire ecosystem when implementing a solution, as in the example above, this statement makes a lot of sense. Consider the following: hundreds of developers devote their time and energy to a single open source solution. Only the arrogant would assume that one person could do it better. Of course, most developers could write a logging framework themselves, but their efforts are better spent doing what they were hired to do: addressing software that is specific to your problem domain.

Good developers program in a language, talented developers code

By Chris L Hardin
Sr. Software Architect

Have you ever heard of “framework fatigue”? The term describes the creep of hundreds of third-party frameworks into development projects. Ten years ago, there wasn’t a whole lot of choice out there for Java, my current language of choice, so the average number of third-party libraries included in a project was one to five; today, the average has grown to around 30. You’ve got Spring, Hibernate, JUnit, Struts, Commons, TestNG, Joda, Unitils, DBUnit and iBatis, to name a few in the Java space, and each of these has dependencies on other libraries, which in turn have dependencies of their own. I could rattle off another list for C#. While I don’t think that choice is a bad thing, and while I tend to use 20-30 third-party libraries in a project, I do think this has had certain side effects that are detrimental to technology. I am going to address what I think is the biggest.

Getting a Job

When did getting a job become more about knowing a specific framework than about being an expert in the Java language? I have seen managers pass over qualified resumes while scanning for the names of frameworks, only to land on someone less qualified who decided to put a particular framework on their resume.

Kevin Rose said that the next time he hires for a project, he is going to hire for talent rather than technology. When he was hiring for a project, he looked for developers working in PHP, but after placing the individuals, he decided to branch out to other technologies, and the developers he had hired weren’t able to make the transition or, dare I use the term, “think outside the box”. A talented developer may know PHP but can easily ramp up on any other technology, whereas a developer with merely a toolbox may not be able to assimilate other technologies fast enough, if at all.

Recruiters and managers are the worst offenders here. These folks are not necessarily technology experts so they try to cultivate a candidate that has the exact blend of frameworks that the target company is using. While this doesn’t necessarily always result in a poor hire, it does tend to exclude perfectly qualified candidates with real talent.

Let’s look at an example I ran across recently. A manager in Denver had a requirement for a developer with Struts 2, so he excluded any candidate without Struts 2 knowledge. In reality, he could have expanded his search to include older versions of Struts, or just MVC frameworks in general. The principles are the same; the technical details can be learned quickly. A talented candidate can adapt and move with your enterprise. This is what Rose was trying to get across.

Ten years ago, knowledge of a single object-oriented language could get you a job doing Java or C++, to name two of the bigger choices. Now you have to learn and have experience with every framework imaginable just to get your resume in front of a hiring manager. This is why the tech sector says there is a shortage of highly qualified labor in the development field. Heaven forbid we have a shortage of JavaServer Faces developers… Most of you know how I feel about JSF, so you get the joke.

Java, in particular, is plagued with frameworks, and they change rapidly. Five years ago it was J2EE, EJB and similar APIs, plus Struts; then Spring and Hibernate; and more lately Grails and Groovy. My point is that it is impossible to know all of these frameworks, and it is also impossible to know some frameworks completely. Spring, for example, is simply too large for any one person to hold all the knowledge of its features, and even if you could learn it all, two or three new versions would be out by the time you had learned the first. The key here is familiarity and talent. A rudimentary understanding of what a framework is used for, plus a little research, will give you what you need to get the job done.

Here is a little secret that developers have known for years and non-technical people have yet to figure out: it doesn’t matter what language a developer knows; they are all similar. A talented developer has an interpreter and a compiler in his head and thinks in pseudo-code anyway. Applying that to a language or framework is just a matter of figuring out the syntax… and that is the easy part.