Let’s get fancy with @Configuration in Spring

Spring has changed a lot over the years to make development more flexible and convenient. Annotations really hit home in Spring 3, but recent releases have added features that almost completely eliminate the need for XML altogether. In the past, you could use annotations to demarcate your own code, but you still needed an XML configuration file if you wanted to use third-party code as Spring beans. With the latest Spring releases, you can use a class for your configuration. Let’s see how it works.

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.ImportResource;
import org.springframework.context.annotation.PropertySource;

@Configuration
@ImportResource("classpath:app-config.xml") // assumed name for the XML file that defines the appProperties bean
public class AppConfig {

    private @Value("#{appProperties['index.location']}") String indexLocation;

    @Bean
    public String indexLocation() {
        return indexLocation;
    }
}

//App.class main
ApplicationContext ctx = new AnnotationConfigApplicationContext(AppConfig.class);

There is a lot going on here, though it may not be apparent from the small amount of code we have written. This code does the following:

  • Maps a class as the configuration for Spring
  • Loads an XML file that defines properties (there are still some things I prefer to do in XML)
  • Creates a bean of type String whose value comes from a property in that property file

While the property example is not especially useful on its own, it shows the flexibility of accessing properties with Spring expressions. The first question you might ask is why I am still loading an XML file when the @Configuration annotation eliminates the need for it. If you declare a bean in the class, you usually need to inject properties into it, which is a little extra work, and on top of that you are writing code that needs to be maintained. Using the XML declaration, you can use property substitution as parameters to an existing class, and no code needs to be placed in your configuration class.

So how do you determine when to put a class in the XML and when to declare it as a bean? Here are my general rules:

  • If you create a class, then demarcate it with a Spring stereotype (@Component, @Service, @Repository, @Controller, @Configurable, etc.)
  • If the class is a class from a third-party jar, then place the configuration in the XML
  • If the class is from a third party but you want finer-grained control over how and when it is instantiated, then create the bean using the @Bean annotation in the class containing the @Configuration annotation

Pretty simple rules to follow…

There are several other annotations that can be used in the configuration class as well, such as @DependsOn and @Value.

Java/.NET Caching with Spring

If you want a major performance boost out of your application, then caching should be a part of your strategy. No doubt you have experienced moments in coding where you needed to store sets of data so that you don’t have to go back to the source to get them every time. The simplest form of caching is lazy loading, where you create the objects in memory the first time and access them from memory from then on. In reality, caching gets a lot more difficult and has many considerations.
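To make the lazy-loading idea concrete, here is a plain-Java sketch with no Spring involved; the class and method names are invented for the example, and the "source" is simulated:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Lazy loading: create the object on the first request, then serve it
// from memory on every request after that.
class LazyRecordCache {
    private final Map<Integer, String> cache = new ConcurrentHashMap<>();
    private int loads = 0; // counts trips back to the "source"

    String find(int id) {
        return cache.computeIfAbsent(id, this::loadFromSource);
    }

    private String loadFromSource(int id) {
        loads++; // in real code this would be a database or web-service call
        return "record-" + id;
    }

    int loads() {
        return loads;
    }
}
```

The second lookup for the same id never touches the source again; that is the entire trick, and everything beyond it is where caching gets hard.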

  • How do I cache in a distributed environment?
  • How do I expire items in the cache?
  • How do I prevent my cache from overrunning memory?
  • How do I make my cache thread-safe and segment it?

All of these are concerns that you will have if you “roll your own” solution to caching. Let’s just leave the heavy lifting to the Spring Framework and we can go back to concerning ourselves with solving the complex problems of our domain.
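To see how quickly “roll your own” grows once eviction and thread safety enter the picture, here is a hand-rolled sketch of a size-bounded, thread-safe cache using only the JDK (the class name is invented; this is exactly the kind of plumbing the Spring abstraction takes off your hands):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// A size-bounded LRU cache: the eldest entry is evicted once the cap is
// reached, and the map is wrapped for thread safety.
class BoundedLruCache<K, V> {
    private final Map<K, V> map;

    BoundedLruCache(final int maxEntries) {
        this.map = Collections.synchronizedMap(
            new LinkedHashMap<K, V>(16, 0.75f, true) { // access-order = LRU
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    return size() > maxEntries;
                }
            });
    }

    V get(K key) { return map.get(key); }

    void put(K key, V value) { map.put(key, value); }

    int size() { return map.size(); }
}
```

And this still says nothing about distribution or expiration by time, which is where EHCache and the Spring abstraction earn their keep.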

Spring has a caching mechanism/abstraction for both Java and .NET, although the Java version is far more robust. Caching in Spring is accomplished through AOP or Aspect Oriented Programming. A caching annotation (Java) or attribute (.NET) can be placed on a method or a class to indicate that it should be cached, which cache should be used and how long to keep the resources before eviction.

Java Spring Cache with EHCache

In Java, caching with Spring couldn’t be easier. Spring supports several different caching implementations but EHCache is the default and by far my favorite. EHCache is robust, configurable and handles a distributed environment with ease. Let’s look at how we can add the ability to cache to a Spring project.

<beans xmlns="http://www.springframework.org/schema/beans"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns:cache="http://www.springframework.org/schema/cache"
   xmlns:p="http://www.springframework.org/schema/p"
   xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/cache http://www.springframework.org/schema/cache/spring-cache.xsd">

  <cache:annotation-driven />

  <!-- Ehcache library setup -->
  <bean id="cacheManager" class="org.springframework.cache.ehcache.EhCacheCacheManager" p:cache-manager-ref="ehcache"/>
  <bean id="ehcache" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean" p:config-location="ehcache.xml"/>
</beans>
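The factory bean points Spring at an ehcache.xml file, which is where the actual caches are defined. A minimal sketch of that file might look like this; the cache name matches the one used in the annotations below, but the sizing and eviction values are illustrative assumptions, not recommendations:

```xml
<ehcache>
    <!-- A cache named "records"; tune these numbers for your own workload -->
    <cache name="records"
           maxElementsInMemory="1000"
           eternal="false"
           timeToIdleSeconds="300"
           timeToLiveSeconds="600"
           memoryStoreEvictionPolicy="LRU"/>
</ehcache>
```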

Now that we have our cache setup, we can start to utilize it.

@Cacheable(value="records", key="'recordList'") //Cache the output of records, return it from the cache if already present
public Collection findRecords(RecordId recordId) {...}

@Cacheable(value="records", key="'recordList'", condition="#recordType == 2") //Only cache if the record type is 2
public Collection findRecords(int recordType) {...}

@CacheEvict(value = "records", allEntries=true) //Evict all entries so the cache is reloaded
public void loadAllRecords() {...}

In the above example, we specify through the annotation that the records collection will be stored in a cache named “records”, and the key to access the collection will be “recordList”. The key parameter is optional. We also showed an example of using the Spring Expression Language to apply the cache conditionally. Remember that caches are defined either dynamically, as above, or in ehcache.xml. For most complex caching scenarios, you will want to define the cache in ehcache.xml with eviction and distribution rules, and Spring will find it by the cache name in the annotation.

What about .NET?

In .NET, you have a very similar mechanism to managing a cache.

<!-- Apply aspects to DAOs -->

[CacheResult("MyCache", "'Record.RecordId=' + #id", TimeToLive = "0:1:0")]
public Collection GetRecord(long id)
{
   // implementation not shown...
}

Remember that with the .NET cache, you provide the caching implementation in the XML just as you do in the Java version. In this example, we have used the provided AspNetCache.

What about controlling the cache yourself, querying the cache and more complex operations? Well, even that is simple: merely autowire the cache manager or retrieve it from the context.

@Autowired
EhCacheCacheManager cacheManager;

Cache cache = cacheManager.getCache("records");
Collection records = (Collection) cache.get("recordList").get();

Built-in .NET Caching

Fortunately for .NET users, there is also a caching framework built into .NET itself. This technique is used mostly in the MVC and ASP world, and I am not particularly fond of it since it is specifically geared toward the web side of an application. I would prefer it to be more generic like the Spring solution, but it does have some advantages: you can configure the caches in the web.config, and you can create custom caching implementations that use different mechanisms for cache management. Here is a great example utilizing MongoDB. The .NET version of the cache works much the same way as the Spring one does. Here are some examples with configuration.

[OutputCache(CacheProfile="RecordList", Duration=20, VaryByParam="recordType")]
public ActionResult GetRecordList(string recordType) {
    // implementation not shown...
}


Now the configuration in the web.config…


<!-- inside system.web -->
<caching>
  <outputCacheSettings>
    <outputCacheProfiles>
      <add name="RecordList" duration="3600" />
    </outputCacheProfiles>
  </outputCacheSettings>
</caching>

If we are deploying our application to the cloud in Azure, we can use the AppFabric cache as demonstrated here.

Hibernate, DataNucleus, JPA Result Caching

Another thing to keep in mind is that when you are using tools such as Hibernate, caching is built into these solutions. A second-level cache implementation is provided when you configure Hibernate, and I tend to use EHCache for this as well. You must also remember to add the Hibernate @Cache annotation at the class level on the objects you want cached. A properly set up ORM solution with Spring and Hibernate and a well-configured second-level cache is very hard to beat for performance and maintainability.
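As a sketch, enabling the second-level cache looks roughly like this in the Hibernate properties (the region factory class name varies across Hibernate versions, so treat these values as assumptions to verify against your own version):

```xml
<!-- Hibernate properties (sketch): turn on the second-level cache with EHCache -->
<prop key="hibernate.cache.use_second_level_cache">true</prop>
<prop key="hibernate.cache.use_query_cache">true</prop>
<!-- Hibernate 3.3+ region factory; older versions use hibernate.cache.provider_class -->
<prop key="hibernate.cache.region.factory_class">net.sf.ehcache.hibernate.EhCacheRegionFactory</prop>
```

Each entity you want cached then gets the class-level annotation, e.g. @Cache(usage = CacheConcurrencyStrategy.READ_WRITE).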


We have really done a lot with very little thanks to Spring and caching. While caching is powerful and will help improve the performance of your application, if it is overdone, it can cause problems that are difficult to diagnose. If you are caching, make sure you always understand the implications for the data and what will happen throughout the cache’s lifecycle.

If you are using Java, you should be using Spring

I spend a fair amount of time evangelizing the Spring Framework, and with good reason. Spring is not only a great lightweight container framework that provides IoC (Inversion of Control) and Dependency Injection, but it pretty much has a tool or component for every task you can think of when dealing with the day-to-day ins and outs of programming.

If you haven’t used Spring before, you probably have a need for it and don’t even know it. Most developers at one time or another have created frameworks to accomplish tasks like remoting, JMS, MVC, database interactions, batch work, etc., so I would label Spring as “The framework” for such tasks instead of succumbing to the “roll your own” urge. I started using Spring back in 2006 and I have not, in over 50 Java projects since then, neglected to utilize it to some extent. It has reduced the amount of code I have to write, allowed me to dynamically wire dependencies together at runtime and even provided tools for tasks that I thought I was going to have to write something custom to accomplish.

Spring was born as a solution to the heavyweight EJB and J2EE container environments. It reduces the overhead of J2EE, allows the use of containers that are not J2EE-compliant, like Tomcat and Jetty, and provides a consistent API that most developers these days are familiar with. Here are some examples of what Spring can do:

  • Dependency Injection (e.g. create a database pool object factory in XML and inject that into objects at runtime)
  • Eliminates the need to write specific code for singleton patterns
  • Allows you to turn a POJO into a service with a mere annotation
  • With Aspects, it allows you to inject values into classes that are not managed by Spring
  • Spring has an abstraction on top of over 100 different frameworks in Java
  • Spring MVC is the most concise and robust MVC framework
  • Spring provides JPA, Hibernate and DataNucleus support and allows transaction demarcation
  • Spring provides AOP capabilities to allow method interception and point cuts
  • Exposing POJO methods as web services is as simple as adding Apache CXF to the mix
  • Annotation support is richer than any other framework
  • Spring is the most widely used Java framework
  • Property file loading and substitution in XML

Spring is not only a Java tool; Spring.NET is available for the .NET platform. It is usually a little behind the Java version, but it is out there.

What are these new concepts AOP, IoC and Dependency Injection?

Usually a discussion of Spring amounts to explaining the concepts at the core of the framework, so let’s take a look at each of them and what they give you. IoC and Dependency Injection go hand in hand: IoC is the concept and Dependency Injection is the mechanism. For example, say you create a service class on your own. Now you need to manage that class by ensuring it has only one instance, and you need to get a reference to it into the other classes that use it, so you create a mechanism for that. Then you need transaction support, so you write that in, but you also need to dynamically read in properties for the environment you are running in, and it goes on and on. As you can see, it not only gets complicated, but that is a lot of code you are writing and maintaining yourself. Spring provides it all. Through XML or annotations (preferably the latter), you can accomplish all of this with one simple Plain Old Java Object (POJO) and Spring conventions, injecting values into your service or injecting your service into any other class as simply as this:

//Service class Spring Bean
@Service
@Transactional
public class MyService implements IService {

    public void doThis() {
        // ...
    }
}

//MVC Controller Class
@Controller
public class MyController {

    @Autowired
    private IService myService;

    public Report doThat() {
        myService.doThis();
        // ...
    }
}

In just a few lines of code, we created a singleton service that is transactional and we created a controller to call that service. There are far more complex things we could do here. For example, using the OpenSessionInView pattern, we could have the controller open and close the transaction so that multiple service calls share the same transactional context. We could also change the isolation level of the transaction. The point here is that we used Dependency Injection to demonstrate what IoC can do.

AOP, or Aspect Oriented Programming, is an advanced concept in the Spring world. AOP is the separation of cross-cutting concerns: transactions, security, logging and anything else that isn’t the business at hand. Spring’s transactional support is itself implemented with AOP, and the ability of Spring and AspectJ to inject objects into other objects that are not managed by Spring is another great example. The goal of AOP is to separate those concerns away from the code; if you didn’t use AOP, you would have to control these elements yourself. Take, for example, a requirement that a user be validated before method calls. Without AOP, you would have to write some checkUserIsValid() method and call it at the beginning of each method. Using AOP, you merely declare, with an annotation or with aspects, that each method of a certain class is wrapped by an interceptor method on another class.
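To see what such an interceptor boils down to, here is a JDK-only sketch using a dynamic proxy; this is roughly the mechanism Spring AOP generates for you, and the interface and class names are invented for the example:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface RecordService {
    String findRecord(int id);
}

class RecordServiceImpl implements RecordService {
    public String findRecord(int id) { return "record-" + id; }
}

// The cross-cutting concern (a user-validity check) lives in one place
// and runs before every method call on the proxied service.
class SecurityInterceptor implements InvocationHandler {
    private final Object target;
    private final boolean userIsValid;

    SecurityInterceptor(Object target, boolean userIsValid) {
        this.target = target;
        this.userIsValid = userIsValid;
    }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        if (!userIsValid) {
            throw new SecurityException("user is not valid"); // the concern
        }
        return method.invoke(target, args); // the business at hand
    }

    @SuppressWarnings("unchecked")
    static <T> T wrap(Class<T> iface, T target, boolean userIsValid) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[] { iface }, new SecurityInterceptor(target, userIsValid));
    }
}
```

The business code in RecordServiceImpl never mentions security at all; the check is woven around it from the outside, which is the whole point of AOP.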

Spring is also for simple projects

You may be thinking Spring is too heavyweight for the task at hand… nonsense. I will guarantee that Spring, used properly, will reduce the amount of code in your project by at least 25%. That is 25% less code for you to maintain or write in the first place. Also, Spring even provides tools to accomplish small tasks such as the following:

  • Finding resources on the classpath or file system (ResourceLocator)
  • Finding classes with a certain annotation
  • Generating JSON/XML from objects and vice versa (Jackson support)
  • Load multiple property files and substitute variables inside your Spring XML files (useful when promoting to different environments)
  • Ability to treat a JNDI resource as just another bean
  • Ability to treat a web service as just another bean
  • JDBCTemplate for issuing database queries and batch framework for batch operations
  • Spring Data for NOSQL support with Mongo
  • MVC Content negotiation to convert a POJO to JSON,XML, PDF (iText), Excel (POI) and more
  • Security integration that supports Windows, Active Directory, Kerberos, Basic, Digest, etc.
  • Robust Testing adapters for Junit and TestNG

I could spend a week delivering Spring training to a group of developers and only scratch the surface of what is there. Without a doubt, though, when I have a tough problem to solve or even a simple Java project, I always utilize parts of Spring. Conveniently, Spring is broken up into modules so that you can include only the ones with the functionality you need and avoid project bloat.


With Spring being the #1 Java framework, I highly recommend spending some time getting familiar with it, and I recommend getting training as well, from someone like myself who is an expert with the framework and can show you everything it has to offer before you start utilizing it. You can also get training directly from VMware, the company that owns SpringSource.

Why you should be using MongoDB/GridFS and Spring Data…

I recently delved into MongoDB for the first time, and although I was skeptical at first, I now prefer a NOSQL database over a traditional RDBMS. I rarely just fall in love with a new technology, but the flexibility, ease of use, scalability and versatility of Mongo are good reasons to give it a chance. Here are some of the advantages of MongoDB.

  • NOSQL – A more object-oriented way to access your data, with no complex SQL commands to learn or remember
  • File Storage – Mongo is a master of storing flat files. Relational databases have never been good at this.
  • No DBA – The need for database administration is greatly minimized with NOSQL solutions
  • No schema, complex structures or normalization. This can be a good thing and also a bad one; inevitably, everyone has worked on a project that was over-normalized and hated it.
  • No complex join logic

Spring Data for Mongo

My first stop when coding against Mongo was to figure out how Spring supported it, and I was not disappointed. Spring Data provides a MongoTemplate and a GridFsTemplate for dealing with Mongo. GridFS is the Mongo file storage mechanism that allows you to store whole files in Mongo. The Mongo NOSQL database uses a JSON-like object storage technique, and GridFS uses BSON (binary JSON) to store file data.

As the name implies, a NOSQL database doesn’t use any SQL statements for data manipulation, but it does have a robust mechanism to accomplish the same ends. Before we start interacting with Mongo, let’s look at some of the components I used to accomplish the examples I am going to show you.

  • Spring 3.1.0.RELEASE
  • Spring Data for MongoDB 1.1.0.M2
  • Mongo Java Driver 2.8.0
  • AspectJ (Optional) 1.7.0
  • Maven (Optional) LATEST

The very first thing we need to configure is our context.xml file. I always start a project with one of these but I use Spring annotations as much as possible to keep the file clean.

<?xml version="1.0"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xmlns:context="http://www.springframework.org/schema/context"
	xmlns:mongo="http://www.springframework.org/schema/data/mongo"
	xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
		http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
		http://www.springframework.org/schema/data/mongo http://www.springframework.org/schema/data/mongo/spring-mongo.xsd">

	<!-- Connection to MongoDB server -->
	<mongo:mongo host="localhost" port="27017" />
	<mongo:db-factory host="localhost" port="27017"
		dbname="MongoSpring" />
	<mongo:mapping-converter id="converter"
		db-factory-ref="mongoDbFactory" />

	<!-- MongoDB GridFS Template -->
	<bean id="gridTemplate" class="org.springframework.data.mongodb.gridfs.GridFsTemplate">
		<constructor-arg ref="mongoDbFactory" />
		<constructor-arg ref="converter" />
	</bean>

	<bean id="mongoTemplate" class="org.springframework.data.mongodb.core.MongoTemplate">
		<constructor-arg ref="mongoDbFactory" />
	</bean>

	<context:annotation-config />
	<context:component-scan base-package="com.doozer" />
	<context:spring-configured />
</beans>


In short, the context file is setting up a few things.

  • The database factory that the templates will use to get a connection
  • The MongoTemplate and GridFSTemplate
  • Annotation support
  • Annotation @Configuration support if needed (Optional)

Let’s take a look at my App class that is the main entry point for this Java application.

@Configurable
public class App {

    @Autowired
    public MongoOperations mongoOperation;

    @Autowired
    public StorageService storageService;

    ApplicationContext ctx;

    public App() {
        ctx = new GenericXmlApplicationContext("mongo-config.xml");
    }
}

I am using AspectJ to weave my dependencies and inject them at compile or load time. If you are not using AspectJ, you need to look up the MongoOperations and StorageService beans from the context itself. The StorageService is a simple @Service bean that provides an abstraction on top of the GridFsTemplate.


@Service
public class StorageServiceImpl implements StorageService {

    @Autowired
    private GridFsOperations gridOperation;

    public String save(InputStream inputStream, String contentType, String filename) {
        DBObject metaData = new BasicDBObject();
        metaData.put("meta1", filename);
        metaData.put("meta2", contentType);

        GridFSFile file = gridOperation.store(inputStream, filename, metaData);
        return file.getId().toString();
    }

    public GridFSDBFile get(String id) {
        System.out.println("Finding by ID: " + id);
        return gridOperation.findOne(new Query(Criteria.where("_id").is(new ObjectId(id))));
    }

    public List<GridFSDBFile> listFiles() {
        return gridOperation.find(null);
    }

    public GridFSDBFile getByFilename(String filename) {
        return gridOperation.findOne(new Query(Criteria.where("filename").is(filename)));
    }
}

Our StorageServiceImpl merely makes calls to the GridFsOperations object and simplifies them. This class is not strictly necessary, since you can inject the GridFsOperations object into any class, but if you plan on keeping a good separation so you can swap Mongo/GridFS out for something else later, it makes sense.

Mongo Template

Now, we are ready to interact with Mongo. First, let’s deal with creating and saving some textual data. The operations below show a few examples of interacting with data in the Mongo database using the MongoTemplate.

User user = new User("1", "Joe", "Coffee", 30);
mongoOperation.save(user);

User savedUser = mongoOperation.findOne(new Query(Criteria.where("id").is("1")), User.class);
System.out.println("savedUser : " + savedUser);

mongoOperation.updateFirst(new Query(Criteria.where("firstname").is("Joe")),
        Update.update("lastname", "Java"), User.class);

User updatedUser = mongoOperation.findOne(new Query(Criteria.where("id").is("1")), User.class);
System.out.println("updatedUser : " + updatedUser);

// mongoOperation.remove(new Query(Criteria.where("id").is("1")), User.class);

List<User> listUser = mongoOperation.findAll(User.class);
System.out.println("Number of users = " + listUser.size());

As you can see, it is fairly easy to interact with Mongo using Spring; the User object is just a POJO with no special annotations. Now, let’s interact with files using our StorageService abstraction over GridFS.

//StorageService storageService = (StorageService) ctx.getBean("storageService"); //if not using AspectJ weaving
String id = storageService.save(App.class.getClassLoader().getResourceAsStream("test.doc"), "doc", "test.doc");
GridFSDBFile file1 = storageService.get(id);
GridFSDBFile file = storageService.getByFilename("test.doc");
List<GridFSDBFile> files = storageService.listFiles();

for (GridFSDBFile file2 : files) {
    System.out.println(file2);
}
The great thing about Mongo is that you can store metadata about the file itself. Let’s look at the output of our file as printed by the code above.

{ "_id" : { "$oid" : "502a61f6c2e662074ea64e52"} , "chunkSize" : 262144 , "length" : 1627645 , "md5" : "da5cb016718d5366d29925fa6a2bd350" , "filename" : "test.doc" , "contentType" : null , "uploadDate" : { "$date" : "2012-08-14T14:34:30.071Z"} , "aliases" : null , "metadata" : { "meta1" : "test.doc" , "meta2" : "doc"}}

Using Mongo, you can associate any metadata with your file you wish and retrieve the file by that data at a later time. Spring support for GridFS is in its infancy, but I fully expect it to only grow as all Spring projects do.

Query Metadata

The power of Mongo also lies in the metadata concept I mentioned earlier, one that relational databases just don’t have. Mongo stores implicit metadata about the files, and it also allows me to attach any data I wish as a metadata layer. You can query this data in the same fashion you would query Mongo directly, using dot notation.

gridOperation.findOne(new Query(Criteria.where("metadata.meta1").is("test.doc")));

Map Reduce

Mongo offers MapReduce, a powerful search algorithm for batch processing and aggregations that is somewhat similar to SQL’s GROUP BY. The MapReduce algorithm breaks a big task into two smaller steps. The map function takes a large input and divides it into smaller pieces, then hands that data off to a reduce function, which distills the individual answers from the map function into one final output. This can be quite a challenge to get your head around when you first look at it, as it requires embedded scripting. I highly recommend reading the Spring Data for Mongo documentation on MapReduce before attempting to write any map-reduce code.
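The two phases are easier to see in plain Java than in Mongo’s embedded JavaScript, so here is a conceptual sketch (an analogy only, not the Mongo or Spring Data API) that counts records per type:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Conceptual MapReduce: map() splits the input into (key, value) pairs,
// reduce() folds each key's values into one result.
class MapReduceSketch {

    // Map phase: emit one (recordType, 1) pair per record
    static Map<String, List<Integer>> map(List<String> recordTypes) {
        Map<String, List<Integer>> emitted = new HashMap<>();
        for (String type : recordTypes) {
            emitted.computeIfAbsent(type, k -> new ArrayList<>()).add(1);
        }
        return emitted;
    }

    // Reduce phase: fold each key's emitted values into a single count
    static Map<String, Integer> reduce(Map<String, List<Integer>> emitted) {
        Map<String, Integer> result = new HashMap<>();
        for (Map.Entry<String, List<Integer>> e : emitted.entrySet()) {
            int sum = 0;
            for (int v : e.getValue()) sum += v;
            result.put(e.getKey(), sum);
        }
        return result;
    }
}
```

In Mongo, the same two functions are written in JavaScript and executed server-side, which is what lets the work be distributed across shards.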

Full-Text Search

MongoDB has no inherent mechanism for searching the text stored in GridFS files; however, this isn’t a unique limitation, as most relational databases also have problems with this or require very expensive add-ons for the functionality. There are a few mechanisms that could serve as a start toward writing this yourself if you are using Java. The first would be to simply take the text and attach it as metadata on the file object. That is a messy solution and screams of inefficiency, but for smaller files it is a possibility. A more ideal solution would be to use Lucene to create a searchable index of the file content and store that index along with the files.

Scaling with Sharding

While very difficult to say in mixed company, sharding describes MongoDB’s ability to scale horizontally automatically. Some of the benefits of this process, as described by the Mongo web site, are:

  • Automatic balancing for changes in load and data distribution
  • Easy addition of new machines without down time
  • Scaling to one thousand nodes
  • No single points of failure
  • Automatic failover


A sharded deployment consists of the following components:

  • One to 1000 shards. Shards are partitions of data. Each shard consists of one or more mongod processes which store the data for that shard. When multiple mongods are in a single shard, they each store the same data – that is, they replicate to each other.
  • Either one or three config server processes. For production systems use three.
  • One or more mongos routing processes.

For testing purposes, it’s possible to start all the required processes on a single server, whereas in a production situation, a number of server configurations are possible.

Once the shards (mongods), config servers, and mongos processes are running, configuration is simply a matter of issuing a series of commands to establish the various shards as part of the cluster. Once the cluster has been established, you can begin sharding individual collections.

Import, Export and Backup

Getting data in and out of Mongo is simple and straightforward. Mongo has the following commands that allow you to accomplish these tasks:

  • mongoimport
  • mongoexport
  • mongodump
  • mongorestore

You can even delve into the data at hand to export pieces and parts of collections by specifying them in the commands and mixing in dot notation, or you can choose to dump data using a query.

$ ./mongodump --db blog --collection posts --out - > blogposts.bson

$ ./mongodump --db blog --collection posts
    -q '{"created_at" : { "$gte" : {"$date" : 1293868800000},
                          "$lt"  : {"$date" : 1296460800000} } }'
Mongodump even takes an argument, --oplog, to get point-in-time backups. Mongo’s backup and restoration utilities are as robust as any relational database’s.

Limitations of MongoDB

Mongo has a few limitations. In some ways, a few of these limitations can be seen as benefits as well.

  • No Joining across collections
  • No transactional support
  • No referential integrity support
  • No full text search for GridFS files built in
  • Traditional SQL-driven reporting tools like Crystal Reports and business intelligence tools are useless with Mongo


The advantages of MongoDB as a database far outweigh the disadvantages. I would recommend a Mongo NOSQL database for any project regardless of the programming language you are using; Mongo has drivers for everything. I do, however, think that in certain scenarios where you are dealing with rapid, realtime OLTP transactions, MongoDB may fall short of competing with a high-performance RDBMS such as Oracle. For the average IT project, I believe Mongo is well-suited. If you still aren’t sold on Mongo by now (I would be pretty shocked if you weren’t), then feast your eyes on the high-profile sites that are using MongoDB as their backend database today.

  • FourSquare
  • Bit.ly
  • github
  • Eventbrite
  • Grooveshark
  • Craigslist
  • Intuit

The list goes on and on… There are also several other NOSQL solutions out there that enjoy popularity.

  • CouchDB
  • RavenDB
  • CouchBase

Optional Components

I used several optional components for my exercises. I wanted to address these for the folks who may not be familiar with them.

AspectJ and @Configurable

Many folks would ask why I chose to use aspect weaving instead of just looking up the objects from the context in the App object. @Configurable allows you to use the @Autowired annotation in a class that is not managed by the Spring context. This process requires load-time or compile-time weaving to work. In Eclipse, I use the AJDT plugin, and for Maven, I use the AspectJ plugin to achieve this. The weaving process looks for certain aspects and weaves the dependencies into the byte code. It solves a lot of chicken-and-egg problems when dealing with Spring.


If you are using Maven and you want all of the dependencies I used for the examples, here is the pom.xml, trimmed to the relevant dependencies (the coordinates match the component versions listed earlier; the project’s own groupId/artifactId are just placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

	<modelVersion>4.0.0</modelVersion>
	<groupId>com.doozer</groupId>
	<artifactId>mongo-examples</artifactId>
	<version>1.0-SNAPSHOT</version>

	<dependencies>
		<!-- Spring framework -->
		<dependency>
			<groupId>org.springframework</groupId>
			<artifactId>spring-context</artifactId>
			<version>3.1.0.RELEASE</version>
		</dependency>
		<dependency>
			<groupId>org.springframework.data</groupId>
			<artifactId>spring-data-mongodb</artifactId>
			<version>1.1.0.M2</version>
		</dependency>

		<!-- mongodb java driver -->
		<dependency>
			<groupId>org.mongodb</groupId>
			<artifactId>mongo-java-driver</artifactId>
			<version>2.8.0</version>
		</dependency>

		<!-- AspectJ (optional) -->
		<dependency>
			<groupId>org.aspectj</groupId>
			<artifactId>aspectjrt</artifactId>
			<version>1.7.0</version>
		</dependency>
	</dependencies>
</project>
Stop using XML files with Spring and Hibernate already

Hibernate has had annotations available for configuration for several years now, and Spring hasn’t been too far behind it. I think the last time I actually used a Hibernate mapping (hbm) file was back in 2006 or so. As far as Spring goes, I still use XML, but only for classes I don’t control myself or when I need to do something like inject values that an annotation wouldn’t support.

Annotations simplify your coding, make it self-documenting and provide compile-time checking. Annotations also add some flexibility over XML configuration. I can’t tell you how many people I see who are just starting to use Hibernate or Spring and do everything with XML files, then complain when they have to refactor and edit the XML, or complain because they have all this XML to manage.

Another complaint I hear is that some developers don’t want to make their model depend on Hibernate or the Java Validation Framework. The answer to that is that you only have to have the runtime annotations on the classpath. Annotations cause no problems in code that isn’t looking for them. I have annotated a domain model with Hibernate and the Java Validation Framework and passed it around the enterprise with no problem. The callers’ code was only using the classes as POJOs and not using the Hibernate functionality… with no performance impact at all.

Remember:  Annotations are only useful in code that is explicitly looking for them!
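A quick JDK-only demonstration of that point, using a made-up @Audited annotation standing in for Hibernate’s or the Validation Framework’s: code that treats the class as a plain POJO never notices the annotation, while code that reflectively looks for it does.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// A made-up runtime annotation, standing in for a framework annotation
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Audited {}

@Audited
class Customer {
    String name = "Joe";
}

class AnnotationDemo {
    // A caller using Customer as a plain POJO: the annotation is inert
    static String usePojo() {
        return new Customer().name;
    }

    // Only code that explicitly asks for the annotation ever sees it
    static boolean looksForAnnotation() {
        return Customer.class.isAnnotationPresent(Audited.class);
    }
}
```

The POJO path compiles and runs with no framework on the classpath beyond the annotation definition itself, which is exactly why annotating a shared domain model is harmless to its consumers.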

Also, if you are that concerned with this non-existent problem, just create interfaces and have your annotated classes implement them. I have done this before… although it leads to fragmentation of validation code: one developer would implement the concrete classes one way using Hibernate and another would implement them totally differently. Centralized validation was one reason the JSR was created for the Java Validation Framework.

Guidelines for when to use Annotations vs. XML

  • Use Annotations – when you control the code that you are annotating

          e.g. You create a domain model, mark it up with Hibernate annotations and the Java Validation Framework, and you create a DAO and Service layer to interact with your model.

  • Use XML – when you don’t have direct control over the code you need to use in Spring or Hibernate, or when you need something that the XML version provides that the annotated model does not. (This was true in earlier versions of Hibernate, but it is no longer the case; Hibernate annotations are quite robust at present.) You should also use XML if you plan on injecting values that an annotation wouldn’t normally support. Remember that the value of an annotation attribute cannot be modified at runtime, so it must be static; in that case, XML is ideal.

e.g. You need to create an instance of the Hibernate SessionFactory via Spring and inject a configuration
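For instance, the XML wiring for that SessionFactory might look roughly like this. This is a sketch assuming Spring 3 with Hibernate 3; the bean names, package, and dialect are illustrative:

```xml
<!-- Sketch: bean names, scanned package, and dialect are illustrative. -->
<bean id="sessionFactory"
      class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
    <property name="dataSource" ref="dataSource"/>
    <property name="packagesToScan" value="com.example.domain"/>
    <property name="hibernateProperties">
        <props>
            <prop key="hibernate.dialect">org.hibernate.dialect.HSQLDialect</prop>
        </props>
    </property>
</bean>
```

None of this requires touching the SessionFactory code itself, which is exactly why XML is the right tool here.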

Just embrace the annotations like the rest of us have and stop whining, or we will hunt you down and make you do something like lick a toilet seat…

I won’t even go into the folks who are using JUnit, DBUnit, Unitils, etc. sans annotations.

Yet Another Expression Language… Spring EL

Most of the time when I hear the term expression language, I cringe; after all, we have seen so many (JSTL, the JSF EL, and some others that will remain nameless). When I heard Spring was adding an expression language in Spring 3, I was a bit disappointed, but really I was already sort of using one without knowing it, with Spring Security, to define my roles and accessibility rules. So I got over my dread and took a dive into Spring EL to see how it could help make my code better.

Right off the bat, I can tell you that Spring EL will save you some coding. It gives you the ability to dynamically assign values to elements at runtime, so you aren’t stuck with a constant you have to replace later, and you won’t have to write a custom strategy class to manipulate values under certain conditions.

Let’s look at an example I pulled straight from the Spring documentation.

public class MovieRecommender {

    private String defaultLocale;

    private CustomerPreferenceDao customerPreferenceDao;

    @Autowired
    public MovieRecommender(CustomerPreferenceDao customerPreferenceDao,
            @Value("#{systemProperties['user.country']}") String defaultLocale) {
        this.customerPreferenceDao = customerPreferenceDao;
        this.defaultLocale = defaultLocale;
    }

    // ...
}
If you know anything about me, it’s that I have embraced convention-based configuration, and I intend to largely ignore the old XML style of defining beans. In this example, the value of the user.country system property is evaluated by the expression and injected into defaultLocale when the bean is constructed.

@PreAuthorize("hasRole('ROLE_USER')")
public void create(Contact contact);

This is an example I have used before. Using Spring Security, you can define what permissions a user must have to access certain URLs in your application, or to secure a method.

You can use Spring EL pretty much anywhere you would normally have put a literal: in an XML file, in an annotation value, and you can even evaluate expressions directly in code. If you literally want a literal, just enclose your value in single quotes, like ‘Hello World’.

ExpressionParser parser = new SpelExpressionParser();
Expression exp = parser.parseExpression("put your expression here");

If you’re not in a context where expressions are directly supported, you need to wrap your expression in the JSF-style #{} delimiters so that Spring understands you don’t want a literal.

Here is another useful example you might want to emulate. 
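The bean definition, adapted from the Spring reference documentation (the NumberGuess class name comes from the docs sample), looks like this:

```xml
<!-- Adapted from the Spring reference documentation samples. -->
<bean id="numberGuess" class="org.spring.samples.NumberGuess">
    <property name="randomNumber" value="#{ T(java.lang.Math).random() * 100.0 }"/>
</bean>
```

The T() operator lets the expression call a static method, here Math.random(), each time the bean is created.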

The calculation of the random number injected into NumberGuess is done at runtime by the expression. Normally, I would have declared another Spring bean that is an instance of a generator, injected it into the other class, and had a method call the generator, but this is just much simpler.

One of my favorite points is that you can use the Elvis operator to shorten the ternary operator syntax and avoid repeating a variable. It saves typing, and since it already exists in Groovy, many will be familiar with it.


ExpressionParser parser = new SpelExpressionParser();
String name = parser.parseExpression("null?:'Unknown'").getValue(String.class);
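For readers unfamiliar with the operator, the SpEL expression above behaves like this plain-Java ternary. This is a minimal sketch with no Spring dependency; the helper name is mine:

```java
public class ElvisDemo {
    // In SpEL, a ?: b returns a when a is non-null, otherwise b,
    // without repeating the tested variable the way a ternary does.
    public static String displayName(String name) {
        return (name != null) ? name : "Unknown";
    }

    public static void main(String[] args) {
        System.out.println(displayName(null));   // Unknown
        System.out.println(displayName("Rick")); // Rick
    }
}
```

Since the tested value in the original expression is the literal null, the whole expression evaluates to 'Unknown'.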


As with anything resembling scripting or an expression language, I recommend using this feature sparingly. Don’t go off and litter your code with it. As a general rule of thumb, save it for the specific circumstance that seems to be begging for it.

Check out the reference documentation and see how this powerful feature can help you in your coding.

Understanding the concept of Convention over Configuration

Software development has come a long way in the past 10 years. There are some great concepts that innovated and streamlined the way we develop custom software, and other concepts most of us would like to forget. The latter include heavy, process-intensive software methodologies, proprietary software packages that don’t adhere to standards, and home-rolled architecture that just doesn’t want to die. On a more positive note, the past few years have brought Test-Driven Development, complexity measurements, more open standards, and one of my personal favorites: convention over configuration.

Simply put, convention over configuration is the ability to use coding conventions to achieve the same functionality that external configuration provided before. Let me give you several examples using some of my favorite frameworks.


Spring

In Spring, convention can be used to define Spring beans, autowiring, and many of the concepts that previously required lots of XML glue. The benefit is that you can look at a POJO class and determine exactly what it is used for without having to go digging in an XML file for the “glue code”. Using simple annotations to demarcate services, utilities, DAOs, etc. eliminates much of the XML that you have to write and maintain.


Struts

Struts is a little late to the convention game, but if you are familiar with Struts, you are probably familiar with the many XML files you maintain to map your actions. The Struts convention plugin eliminates the need for all this XML by using annotations to define your mappings.

JUnit and TestNG

While testing doesn’t involve configuration as much as the previous examples, we gain another benefit here. Previously, all test cases needed to extend a parent test class, but no longer: JUnit, Unitils, TestNG, and DBUnit have all had a “zero configuration” option for some time, just by annotating simple POJO classes.

Hibernate and Java Validation

I saved the best for last. Hibernate embraced annotations early, allowing developers to move away from hbm.xml files. When you couple this with the power of the Java Validation Framework, you get a persistable domain model with built-in validation, all in one centralized location, which is ideal if you have multiple front ends sharing the same common codebase.
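As a sketch of what that combination looks like (not runnable on its own; it assumes the JPA and Bean Validation annotation jars are on the classpath, and the entity and field names are illustrative):

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

// Persistence mapping and validation rules live together on the model.
@Entity
public class Contact {

    @Id
    @GeneratedValue
    private Long id;

    @NotNull
    @Size(max = 100)
    private String name;

    // getters and setters omitted
}
```

Every front end that shares this class gets the same mapping and the same validation rules for free.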

“Zero Configuration” is another term that is often used to describe a convention-driven design.

Annotations are relatively new to most developers, even though they have been used in both Java and C# for several years now. Annotations are simply a way to make code self-documenting.

A convention-driven design has several benefits over a configuration-driven one.

1. Faster to develop
2. The code is self explanatory
3. Easier to maintain, thus reducing the cost of ownership
4. It’s cleaner and easy to understand
5. Easier to refactor. (Automated refactoring tools very often butcher XML configuration files)
6. Easier to learn for newer developers. (It’s the change-resistant older-developers who usually find it difficult to make the switch)

Now that you know the benefits of a convention-driven design, take a look at your own frameworks and see if there is a way to accomplish it. I recommend starting the discovery process with Spring, as it has a nearly flawless implementation of this paradigm.