Let’s get fancy with @Configuration with Spring

Spring has changed a lot over the years to make things more flexible and convenient for developers. Annotations really hit their stride in Spring 3, and recent releases have added features that almost completely eliminate the need for XML altogether. In the past, you could use annotations to demarcate your own code, but you still needed an XML configuration file if you wanted to use third-party code as Spring beans. With the latest Spring releases, you can use a plain class for your configuration. Let’s see how it works.

...
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.ImportResource;
import org.springframework.context.annotation.PropertySource;

@Configuration
@PropertySource("classpath:/app.properties")
@ImportResource("classpath:/mongo-config.xml")
public class AppConfig {

    private @Value("#{appProperties['index.location']}") String indexLocation;

    @Bean(name="indexLocation")
    public String getIndexLocation() {
        return indexLocation;
    }

  ...

//App.class main
ApplicationContext ctx =  new AnnotationConfigApplicationContext(AppConfig.class);

There is a lot going on here, but it may not be apparent from the small amount of code we have written. This code does the following:

  • Marks a class as the configuration for Spring
  • Loads a property file and imports an XML configuration file (there are still some things I prefer to do in XML)
  • Creates a String bean whose value comes from a property in the property file

While the property example is not especially useful on its own, it shows the flexibility of accessing properties through Spring expressions. The first question you might ask is why I am still loading an XML file when the @Configuration annotation eliminates the need for it. If you declare a bean in the class, in most cases you need to inject properties into it yourself, which is a little extra work and a little extra code to maintain. Using the XML declaration, you can use property substitution to supply parameters to an existing class, and no code needs to be placed in your configuration class.
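As a minimal sketch (assuming the util and context namespaces are declared in the file header, and with made-up property names), the imported mongo-config.xml might look something like this:

<!-- Exposes app.properties as a Properties bean named "appProperties",
     which is what the SpEL expression #{appProperties['index.location']} reads -->
<util:properties id="appProperties" location="classpath:/app.properties"/>

<!-- Property substitution into a third-party class; no Java code required -->
<context:property-placeholder location="classpath:/app.properties"/>

<bean id="mongo" class="com.mongodb.Mongo">
    <constructor-arg value="${mongo.host}"/>
    <constructor-arg value="${mongo.port}"/>
</bean>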

So how do you determine when to put a class in the XML and when to declare it as a bean in Java? Here are my general rules:

  • If you create the class yourself, demarcate it with a Spring stereotype (@Component, @Service, @Repository, @Controller, @Configurable, etc.)
  • If the class comes from a third-party jar, then place the configuration in the XML
  • If the class is from a third party but you want finer-grained control over how and when it is instantiated, then create the bean with the @Bean annotation inside the class carrying the @Configuration annotation

Pretty simple rules to follow…

There are several other annotations that can be used in the configuration class as well, such as @DependsOn and @Value.
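Here is a hedged sketch of the third rule above, declaring a third-party class as a bean and using @Value in the configuration class; BasicDataSource is just one example of such a class, and the property names are assumptions:

import org.apache.commons.dbcp.BasicDataSource;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfig {

    private @Value("${db.url}") String url;
    private @Value("${db.user}") String user;
    private @Value("${db.password}") String password;

    // The third-party class is built here so we control exactly how it is constructed,
    // and Spring will call close() on it at shutdown
    @Bean(destroyMethod = "close")
    public BasicDataSource dataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl(url);
        ds.setUsername(user);
        ds.setPassword(password);
        return ds;
    }
}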

If you are using Java, you should be using Spring

I spend a fair amount of time evangelizing the Spring Framework, and with good reason. Spring is not only a great lightweight container framework that provides IoC (Inversion of Control) and Dependency Injection, but it has a tool or component for nearly every task you run into in day-to-day programming.

If you haven’t used Spring before, you probably have a need for it and don’t even know it. Most developers at one time or another have created frameworks to accomplish tasks like remoting, JMS, MVC, database interactions, batch work, etc., so I would label Spring as “the framework” for such tasks rather than succumbing to the “roll your own” urge. I started using Spring back in 2006, and in over 50 Java projects since then I have never neglected to use it to some extent. It has reduced the amount of code I have to write, allowed me to wire dependencies together dynamically at runtime and even provided tools for tasks I thought I would have to write something custom to accomplish.

Spring was born as a solution to the heavyweight EJB and J2EE container environments. It reduces the overhead of J2EE, allows the use of containers that are not J2EE compliant, like Tomcat and Jetty, and provides a consistent API that most developers these days are familiar with. Here are some examples of what Spring can do:

  • Dependency Injection (e.g. create a database pool object factory in XML and inject that into objects at runtime)
  • Eliminates the need to write specific code for singleton patterns
  • Allows you to turn a POJO into a service with a mere annotation
  • With Aspects, it allows you to inject values into classes that are not managed by Spring
  • Spring has an abstraction on top of over 100 different frameworks in Java
  • Spring MVC is the most concise and robust MVC framework
  • Spring provides JPA, Hibernate and DataNucleus support and allows transaction demarcation
  • Spring provides AOP capabilities to allow method interception and point cuts
  • Exposing POJO methods as web services is as simple as adding Apache CXF to the mix
  • Annotation support is richer than any other framework
  • Spring is the most widely used Java framework
  • Property file loading and substitution in XML

Spring is not only a Java tool; Spring.NET is available for the .NET platform. It usually lags a little behind the Java version, but it is out there.

What are these new concepts AOP, IoC and Dependency Injection?

A discussion of Spring usually starts with explaining the concepts at the core of the framework, so let’s take a look at each of them and what they give you. IoC and Dependency Injection go hand in hand: IoC is the concept and Dependency Injection is the mechanism. For example, say you create a service class on your own. Now you need to manage that class by ensuring it has only one instance, and you need a way to get a reference to it into the other classes that use it, so you build a mechanism for that. Then you need transaction support, so you write that in, and then you need to dynamically read in properties for the environment you are running in, and it goes on and on. As you can see, it not only gets complicated, but that is a lot of code you are writing and maintaining yourself. Spring provides it all, and through XML or annotations (preferably the latter), a simple Plain Old Java Object (POJO) can accomplish all of this through Spring conventions. You can inject values into your service, or inject your service into any other class, as simply as this:


//Service class Spring Bean
@Service
@Transactional
public class MyService implements IService {

    public Report doThis() {
        // transactional business logic here
        return new Report();
    }
}

//MVC Controller Class
@Controller
public class MyController {

    @Autowired
    private MyService myService;

    public Report doThat() {
        return myService.doThis();
    }
}

In just a few lines of code, we created a transactional singleton service and a controller that calls it. There are far more complex things we could do here. For example, using the Open Session in View pattern, we could have the controller open and close the transaction so that multiple service calls share the same transactional context, and we could also change the isolation level of the transaction (see the sketch below). The point here is that we used Dependency Injection to demonstrate what IoC can do.
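As a rough sketch of those transaction attributes (the annotation and its settings are Spring’s; the service and the isolation choice here are just an illustration):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReportService {

    // Joins a transaction opened further up the stack (e.g. by an Open Session in View
    // filter) if one exists, otherwise starts its own, with a stricter isolation level
    @Transactional(propagation = Propagation.REQUIRED, isolation = Isolation.SERIALIZABLE)
    public Report doThat() {
        // business logic here
        return new Report();
    }
}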

AOP, or Aspect Oriented Programming, is an advanced concept in the Spring world. AOP is simply the separation of cross-cutting concerns. Spring’s transactional support is built this way, and the ability of Spring and AspectJ to inject objects into classes that are not managed by Spring is another great example. Transactions, security, logging and anything else that is not the business at hand are cross-cutting concerns, and the goal of AOP is to separate them from the business code. If you didn’t use AOP, you would have to handle these elements yourself. Say, for example, you want to check that a user is valid before each method call. Without AOP, you would have to write some checkUserIsValid() method and call it at the beginning of every method. With AOP, you can simply declare, with an annotation or an aspect, that every method of a certain class should be intercepted by a method on another class.
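Here is a minimal sketch of that user check written as a Spring AOP aspect; the pointcut expression, package name and validity lookup are assumptions, and aspect support (e.g. <aop:aspectj-autoproxy/>) has to be enabled for it to run:

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class UserValidationAspect {

    // Runs before every public method on beans in the (hypothetical) service package
    @Before("execution(public * com.myapp.service..*.*(..))")
    public void checkUserIsValid() {
        if (!currentUserIsValid()) {
            throw new IllegalStateException("User is not valid");
        }
    }

    private boolean currentUserIsValid() {
        // look the current user up in a session or security context
        return true;
    }
}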

Spring is also for simple projects

You may be thinking Spring is too heavyweight for the task at hand… nonsense. I will guarantee that Spring, used properly, will reduce the amount of code in your project by at least 25%. That is 25% less code for you to maintain, or to write in the first place. Spring even provides tools to accomplish small tasks such as the following:

  • Finding resources on the classpath or file system (ResourceLoader)
  • Finding classes with a certain annotation
  • Generating JSON/XML from objects and vice versa (Jackson support)
  • Loading multiple property files and substituting variables inside your Spring XML files (useful when promoting to different environments)
  • Ability to treat a JNDI resource as just another bean
  • Ability to treat a web service as just another bean
  • JdbcTemplate for issuing database queries, and a batch framework for batch operations (see the sketch after this list)
  • Spring Data for NOSQL support with Mongo
  • MVC content negotiation to convert a POJO to JSON, XML, PDF (iText), Excel (POI) and more
  • Security integration that supports Windows, Active Directory, Kerberos, Basic, Digest, etc.
  • Robust testing support for JUnit and TestNG
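To give a feel for how small these wins are in practice, here is a hedged sketch of the JdbcTemplate item above; the table and column names are made up:

import java.util.List;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class UserQueries {

    private final JdbcTemplate jdbcTemplate;

    public UserQueries(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public List<String> findActiveUsernames() {
        // One line replaces the usual Connection/Statement/ResultSet boilerplate
        return jdbcTemplate.queryForList("select username from users where active = 1", String.class);
    }
}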

I could spend a week delivering Spring training to a group of developers and only scratch the surface of what is there. Without a doubt, though, when I have a tough problem to solve or even a simple Java project, I always utilize parts of Spring. Conveniently, Spring is broken up into modules so that you can include only the ones with the functionality you need and avoid project bloat.

Conclusion

With Spring being the #1 Java framework, I highly recommend spending some time getting familiar with it, and I recommend getting some training as well from someone like myself, an expert with the framework who can show you everything it has to offer before you start utilizing it. You can also get training directly from VMware, the company that owns SpringSource.

Create a winning web services strategy with a hub

SOA architecture and the Enterprise Service Bus aren’t new concepts in IT, but the drive to migrate to RESTful services over the past few years has made them more relevant than they once were. A good rule of thumb for determining whether you need an ESB is to analyze and visualize what your infrastructure looks like. Service bus architecture works best when your architecture looks like a wagon wheel or “hub” with spokes: multiple systems, whether external or internal, communicating with a central “brain” or repository of information.

Let’s look at some of the reasons an ESB implementation is a good idea.

  • An ESB provides a facade or interface on top of the external systems your applications need to interact with, giving you the ability to replace those external systems at will without changing the applications in front of the ESB. Simple interface-driven design at its best.
  • An ESB provides a common home for your enterprise business logic, data transformations and hard core systems interactions.
  • User interfaces into the system can also be more readily shifted from one technology to another making you more agile. Also, multiple interfaces can run off the same ESB services, e.g. iPhone, iPad, Ajax framework UIs, etc.
  • An ESB gives you an external API for business partners to integrate with you… too often third-party integrations are an afterthought, handled as a reactionary measure to accommodate another system
  • An ESB encourages code reuse in your enterprise…
  • An ESB gives you a set of standards, SOAP, MTOM, REST, JSON, etc. Standards make integrations much simpler.
  • Developers are no longer accessing your databases directly with code. Control over performance is pushed back to the ESB tier.
  • An ESB is ideal in an environment where you need “translation” between multiple systems in .NET, Java, PHP, C++ and legacy technologies
A lot of great reasons to use an ESB in your system architecture, but an ESB isn’t a catch-all solution. It has a few drawbacks that need to be considered.
  • ESB implementations aren’t for the faint of heart. It takes some expertise to plan and execute a successful hub.
  • ESB-based systems see an increase in network chatter, since every interaction goes back to the ESB over the network.
  • There are not many open source solutions for ESB implementation. Mule ESB is the leading one for Java. Commercially, BizTalk, webMethods and TIBCO are available; they are extremely expensive, but also feature rich.
  • In larger IT organizations, an ESB implementation is usually confined to a small group, and other groups tend to ignore its existence and continue to write logic straight into their projects, creating a silo approach of small, isolated projects with duplicated logic and code.
  • An ESB can create a bottleneck in larger organizations that have many projects running in parallel; the ESB team has to grow to meet the demand from other groups and service their requests.
  • An ESB doesn’t make sense if you have a small number of system interactions. For example, a product company with a single product and database doesn’t need an ESB implementation. Again, it goes back to what your visualization looks like: in this example, the diagram would be a single spoke, and we are looking for the wagon wheel…
ESB, hub, SOA… whatever you want to call it, it is all about the same thing: reducing complexity, reducing cost of ownership, increasing agility and providing easy integrations to make your IT organization successful.

Avoid “Roll your own” and reduce software cost of ownership

A wise man once said that just because you can do something doesn’t mean you must do it. I can’t think of a better industry to apply this to than software. Think about the business domain you are in for a minute… it could be healthcare, finance, content management. Now turn your attention to the software you are writing and analyze how much time and money you spend on custom software to facilitate the business. Then think about all the components of that software that do not apply strictly to your business domain, like logging, auditing, utilities, remote communications, etc. Were these concerns, which are not related directly to your business, written by developers in house, or were they simply acquired from a third party like open source? Chances are that if you answered the former, your cost of ownership is substantially higher than it needs to be.

When languages are in their infancy, “rolling your own” solutions to cross-cutting concerns is a necessity, but the commercial and open source communities quickly catch up and provide solutions that can fit your needs. In the case of legacy software where, at the time, there was no choice, a custom approach is warranted. My philosophy has always been this: “Let a software company do what it’s good at… writing software, and turn your attention to your own business.” After all, if you’re a healthcare company, you’re not in the business of custom software, and more to the point, you’re not in the business of writing logging software, as an example.

Unfortunately, there are still IT departments that commit the cardinal sin of “reinventing the wheel” because they want total code ownership. For example, I have heard the statement, “Why would I use Jetty as an embedded HTTP server when I could roll my own in a few hours?” or “Why would I use Log4j when I have a simple class that does the same thing?”. On the surface, these statements can seem innocent enough, but let’s look at the consequences of going this route for the organization, laid out as a timeline.

  1. Developer writes custom logging component (1 day)
  2. New requirements for logging come up that developer must implement (2 days)
  3. Bugs and maintenance of said logging component over 2 years (10 days)
  4. Developer leaves to get another job and another developer has to take over and learn his code (1 day)
  5. QA and Testing of this component over two years (4 days)
  6. Developer leaves again and new developer takes over (1 day)
  7. Someone comes to their senses and replaces the component with an open source component (2 days)
Total cost of owning your own logging component: 21 days over a 2 year period
Software engineers have a habit of considering only their own ecosystem when writing features into software. The downstream efforts that drive up total cost of ownership should always be considered. Let’s look at how this could have been done differently.
  1. Developer needs a logging component, so he chooses the open source solution Log4j or log4net and drops it into the project (2 hours)
  2. New requirements for logging come up. These requirements are already supported through configuration. (0 hours)
  3. Bugs and maintenance of the logging software are handled by the open source community, and the developer just updates the library (0.5 hours)
  4. Developer gets another job and a new developer comes in already understanding the component (0 hours)
  5. QA and Testing of the component was done by open source community (0 hours)
  6. Developer leaves again, but another developer comes in already knowing the component (0 hours)
  7. A new widely used open source framework is available that replaces said component, so component is switched out (2 hours)
Total cost of using an open source logging component: 4.5 hours over a 2 year period
By using a library maintained and developed by someone else, we saved over 20 days that could have been spent working on our core business.
I am very fond of saying that I will spend 2 days looking for an open source component that would take me only two hours to write myself. If you consider your entire ecosystem when implementing a solution, as in the example above, this statement makes a lot of sense. Consider the following… hundreds of developers devote their time and energy to a single open source solution. Only the arrogant would assume that one person could do it better. Of course, most developers could write a logging framework themselves, but their efforts are better spent doing what they were hired to do: addressing software that is specific to your problem domain.

Fusion: Mimic NSNotification functionality from Obj-C into Java with Spring

The first time you heard of an NSNotification in Objective-C, you probably thought it was a mechanism for notifying a user of an event, but you quickly learn that it is designed to send event notifications from one part of your app to another. For example, you are using ASIHTTPRequest to load data asynchronously from a RESTful web service and you want to know when that data is loaded so that you can refresh a UI component. There are other ways to achieve this besides NSNotification, like a delegate or a direct reference to the object to call back, but those require tighter coupling than we would often like, since layering and separation of the code are important for maintainability.

Let’s look at an example of sending and receiving an NSNotification

//Class 1
[[NSNotificationCenter defaultCenter] postNotificationName:NOTIFICATION_CONNECTION_FAILURE object:nil];
//Class 2
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(connectionFailure:) name:NOTIFICATION_CONNECTION_FAILURE object:nil];

//somewhere after the event is fired, maybe in viewWillDisappear delegate method
[[NSNotificationCenter defaultCenter] removeObserver:self name:NOTIFICATION_CONNECTION_FAILURE object:nil];

Pretty simple mechanism for wiring up a notification, but what if we are in Java and want to use the same mechanism? Often the reason for doing this in Java is that you have a hierarchy of UI components in Swing, GWT or Vaadin, and an event that takes place deep in one hierarchy needs to be received deep in another. In that case a listener won’t work, because the objects are not related to each other and using a listener would require coupling them tightly together. Spring has just such an abstraction to assist with this.

The following technique will decouple your code from itself but will add a coupling to Spring and JMX, which is an acceptable tradeoff. Let’s look at how we can publish a notification. To make this work, you need to follow the Spring documentation’s instructions for creating a bean that you want to manage and mapping it in the MBean exporter. Also note that this technique is for beans that have been exported as managed beans in JMX; for a more rudimentary mechanism, just use the raw JMX Notification API itself.
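A minimal sketch of that exporter wiring (assuming the context namespace is declared in the XML header; the bean id is arbitrary):

<!-- Exports beans annotated with @ManagedResource, such as the class below -->
<context:mbean-export/>

<bean id="myTestNotification" class="com.domain.notification.MyTestNotification"/>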

package com.domain.notification;

import javax.management.Notification;

import org.springframework.jmx.export.annotation.ManagedAttribute;
import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedOperationParameter;
import org.springframework.jmx.export.annotation.ManagedOperationParameters;
import org.springframework.jmx.export.annotation.ManagedResource;
import org.springframework.jmx.export.notification.NotificationPublisher;
import org.springframework.jmx.export.notification.NotificationPublisherAware;

@ManagedResource(objectName="bean:name=MyTestNotification", description="My Managed Bean", log=true,
    logFile="jmx.log", currencyTimeLimit=15, persistPolicy="OnUpdate", persistPeriod=200,
    persistLocation="foo", persistName="bar")
public class MyTestNotification implements MyBean, NotificationPublisherAware {

    private String name;
    private int age;
    private boolean isSuperman;
    private NotificationPublisher publisher;

    // other getters and setters omitted for clarity

    @ManagedAttribute(description="The Age Attribute", currencyTimeLimit=15)
    public int add(int x, int y) {
        int answer = x + y;
        this.publisher.sendNotification(new Notification("add", this, 0));
        return answer;
    }

    public void setNotificationPublisher(NotificationPublisher notificationPublisher) {
        this.publisher = notificationPublisher;
    }

  public void setAge(int age) {
    this.age = age;
  }

  @ManagedAttribute(description="The Name Attribute",
      currencyTimeLimit=20,
      defaultValue="bar",
      persistPolicy="OnUpdate")
  public void setName(String name) {
    this.name = name;
  }

  @ManagedAttribute(defaultValue="foo", persistPeriod=300)
  public String getName() {
    return name;
  }

  @ManagedOperation(description="Add two numbers")
  @ManagedOperationParameters({
    @ManagedOperationParameter(name = "x", description = "The first number"),
    @ManagedOperationParameter(name = "y", description = "The second number")})
  public int add(int x, int y) {
    int answer = x + y;
    // publish a JMX notification each time add() is invoked
    this.publisher.sendNotification(new Notification("add", this, 0));
    return answer;
  }

  public void dontExposeMe() {
    throw new RuntimeException();
  }
}

In the case of this class, the notification will be published when someone calls the add() method of MyTestNotification; add() is a managed operation in JMX at this point. Now let’s look at how to receive this notification from another class.

package com.domain.notification;

import javax.management.AttributeChangeNotification;
import javax.management.Notification;
import javax.management.NotificationFilter;
import javax.management.NotificationListener;

public class EventNotificationListener
               implements NotificationListener, NotificationFilter {

    public void handleNotification(Notification notification, Object handback) {
        System.out.println(notification); //receive the add notification
        System.out.println(handback);
    }

    public boolean isNotificationEnabled(Notification notification) {
        return AttributeChangeNotification.class.isAssignableFrom(notification.getClass());
    }
}

There are many ways to wire this together, and this is just one example. The disadvantages of this approach are that it is nowhere near as simple as the NSNotification mechanism in Objective-C, and it adds some overhead since the MyTestNotification class has to be exported as a managed bean in JMX.
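For completeness, here is one possible, hand-rolled way to attach the listener, assuming the bean was exported to the platform MBeanServer; the object name matches the @ManagedResource annotation above:

package com.domain.notification;

import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ListenerRegistrar {

    public static void register() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("bean:name=MyTestNotification");
        // A null filter delivers every notification from this bean; pass the listener
        // itself as the filter to restrict delivery to attribute-change notifications
        server.addNotificationListener(name, new EventNotificationListener(), null, null);
    }
}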

As mentioned, we can also forgo Spring and use straight JMX to achieve our goal. Let’s look at an example.

import javax.management.*;

public class MyNotification
        extends NotificationBroadcasterSupport
        implements MyBean {

    private long sequenceNumber = 0;
    private int test = 10;

    public void myMethod() {
        int oldValue = this.test;
        this.test = 20;

        Notification n =
                new AttributeChangeNotification(this,
                        sequenceNumber++,
                        System.currentTimeMillis(),
                        "test changed",
                        "test",
                        "int",
                        oldValue,
                        this.test);

        /* Now send the notification using the sendNotification method
           inherited from the parent class NotificationBroadcasterSupport. */
        sendNotification(n);
    }

    @Override
    public MBeanNotificationInfo[] getNotificationInfo() {
        String[] types = new String[]{
            AttributeChangeNotification.ATTRIBUTE_CHANGE
        };

        String name = AttributeChangeNotification.class.getName();
        String description = "An attribute of this MBean has changed";
        MBeanNotificationInfo info =
                new MBeanNotificationInfo(types, name, description);
        return new MBeanNotificationInfo[]{info};
    }

    @Override
    public void handleNotification(NotificationListener listener, Notification notification, Object handback) {
        System.out.println(notification); //receive the notification
        System.out.println(handback);
    }
...

Still a lot of coupling going on here, and a disadvantage is that we are extending NotificationBroadcasterSupport. While this gives us a workable way to mimic notifications, these techniques should be treated as a “last resort” mechanism.

Another mechanism that can be used for this purpose is the ApplicationContext’s publishEvent(ApplicationEvent event) method, or the ApplicationEventPublisher interface if you do not have direct access to an ApplicationContext. This is a nice, quick way to publish an event and get a notification. Let’s look at an example.

...
       //We'll publish an event somewhere
        MyEvent fileUploadEvent = new MyEvent(this, user);
        getApplicationContext().publishEvent(fileUploadEvent);

//now somewhere else we can receive it.

public class EventListener implements ApplicationListener {

	@Override
	public void onApplicationEvent(ApplicationEvent evt) {
		if (evt instanceof MyEvent) {
			//do something with the event
		}
	}
}
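The MyEvent type used above is just an ApplicationEvent subclass; a minimal sketch (the User payload is only an example):

import org.springframework.context.ApplicationEvent;

public class MyEvent extends ApplicationEvent {

	private final User user;

	public MyEvent(Object source, User user) {
		super(source);
		this.user = user;
	}

	public User getUser() {
		return user;
	}
}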

We need to register a task executor for the event multicaster in the ApplicationContext.xml so that events are delivered asynchronously:

<bean id="applicationEventMulticaster" class="org.springframework.context.event.SimpleApplicationEventMulticaster">
    <property name="taskExecutor">
        <bean class="org.springframework.scheduling.timer.TimerTaskExecutor"/>
    </property>
</bean>

Or, as an alternative, we can extend SimpleApplicationEventMulticaster:

public class GlobalEventMulticaster extends SimpleApplicationEventMulticaster implements InitializingBean {

    private ApplicationContext application;

    public void afterPropertiesSet() throws Exception {
        if (this.application == null) {
            //get your application context here
        }
    }
}

References

JMX Notifications

Spring/JMX Notifications

Good developers program in a language, talented developers code

By Chris L Hardin
Sr. Software Architect

Have you ever heard of “framework fatigue”? The term describes the creep of hundreds of third-party frameworks into development projects. Ten years ago, there wasn’t a whole lot of choice out there for Java, my current language of choice, so the average number of third-party libraries included in a project was 1-5; today, the average has grown to around 30. You’ve got Spring, Hibernate, JUnit, Struts, Commons, TestNG, Joda, Unitils, DBUnit, iBatis just to name a few in the Java space, and each of these has dependencies on other libraries, which have dependencies on others still. I could rattle off another list for C#. While I don’t think that choice is a bad thing, and while I tend to use 20-30 third-party libraries in a project, I do think this has had certain side effects that are detrimental to technology. I am going to address what I think is the biggest.

Getting a Job

When did getting a job become more about knowing a specific framework than about being an expert in the Java language? I have seen managers walk over qualified resumes looking for the names of frameworks, only to land on someone less qualified who decided to put a particular framework on their resume.


Kevin Rose, CEO of Digg.com, said that the next time he hires for a project, he is going to hire for talent rather than technology. He said that when he was hiring for a project, he looked for developers working in PHP, but after placing those individuals, he decided to branch out to other technologies, and the developers he had hired weren’t able to make the transition and, dare I use the term, “think outside the box”. A talented developer may know PHP but can easily ramp up on any other technology, whereas a developer with merely a toolbox may not be able to assimilate other technologies fast enough, if at all.

Recruiters and managers are the worst offenders here. These folks are not necessarily technology experts, so they try to cultivate a candidate who has the exact blend of frameworks the target company is using. While this doesn’t always result in a poor hire, it does tend to exclude perfectly qualified candidates with real talent.

Let’s look at an example I ran across recently. A manager in Denver had a requirement for a developer with Struts 2 experience, so he excluded any candidate without Struts 2 knowledge. In reality, he could have expanded his search to include older versions of Struts, or MVC frameworks in general. The principles are the same; the technical details can be learned quickly. A talented candidate can adapt and move with your enterprise. This is what Rose was trying to get across.

Ten years ago, knowledge of a single object-oriented language could get you a job doing Java or C++, to name two of the bigger choices. Now you have to have experience with every framework imaginable just to get your resume in front of a hiring manager. This is why the tech sector says there is a shortage of highly qualified labor in the development field. Heaven forbid we have a shortage of JavaServer Faces developers… most of you know how I feel about JSF, so you get the joke.

Java in particular is plagued with frameworks, and they change rapidly. Five years ago it was J2EE, EJB and similar APIs, plus Struts; then Spring and Hibernate; and more recently Grails/Groovy. My point is that it is impossible to know all these frameworks, and it is also impossible to know some frameworks completely. Spring, for example, is just too large for any one person to hold all the knowledge of its features, and even if you could learn it all, two or three new versions would be out by the time you had learned the first. The key here is familiarity and talent. A rudimentary understanding of what a framework is used for, plus a little research, will give you what you need to get the job done.

Here is a little secret that developers have known for years and non-technical people have yet to figure out: it doesn’t matter which language a developer knows, because they are all similar. A talented developer has an interpreter and compiler in his head and thinks in pseudo-code anyway. Applying that to a language or framework is just a matter of figuring out the syntax… and that is the easy part.


The latest and greatest JUnit features… expand your toolset

JUnit has been around for some time and it is still a favorite of mine; well, it became my favorite after it moved to annotations instead of requiring you to extend that pesky TestCase class. Modern JUnit tests are small, concise and extremely handy when it comes to testing your code. Let’s look at some of the latest features JUnit offers and see how they can help you write better code with the Test-Driven Development paradigm.

Categories

Each test method and test class can be annotated as belonging to a category:

 
public static class SomeUITests {
    @Category(UserAvailable.class)
    @Test
    public void askUserToPressAKey() { }

    @Test
    public void simulatePressingKey() { }
}

@Category(InternetConnected.class)
public static class InternetTests {
    @Test
    public void pingServer() { }
}

To run tests in a particular category, you need to set up a test suite. In JUnit 4, a test suite is essentially an empty annotated class. To run only the tests in a particular category, you use the @RunWith(Categories.class) annotation and specify which category you want to run with the @IncludeCategory annotation:

 
@RunWith(Categories.class)
@IncludeCategory(SlowTests.class)
@SuiteClasses( { AccountTest.class, ClientTest.class })
public class LongRunningTestSuite {}

You can also ask JUnit not to run tests in a particular category using the @ExcludeCategory annotation

 
@RunWith(Categories.class)
@ExcludeCategory(SlowTests.class)
@SuiteClasses( { AccountTest.class, ClientTest.class })
public class UnitTestSuite {}
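The category types referenced by these suites are nothing special; they are plain marker interfaces, shown together here for brevity (in practice each would live in its own file):

public interface SlowTests { }

interface UserAvailable { }

interface InternetConnected { }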

assertThat

Two years ago, Joe Walnes built a new assertion mechanism on top of what was then JMock 1. The method name was assertThat, and the syntax looked like this:

assertThat(x, is(3));
assertThat(x, is(not(4)));
assertThat(responseString, either(containsString("color")).or(containsString("colour")));
assertThat(myList, hasItem("3"));

More generally:

assertThat([value], [matcher statement]);

Advantages of this assertion syntax include:

  • More readable and typeable: this syntax allows you to think in terms of subject, verb, object (assert “x is 3”) rather than assertEquals, which uses verb, object, subject (assert “equals 3 x”)
  • Combinations: any matcher statement s can be negated (not(s)), combined (either(s).or(t)), mapped to a collection (each(s)), or used in custom combinations (afterFiveSeconds(s))
  • Readable failure messages. Compare
    assertTrue(responseString.contains("color") || responseString.contains("colour"));
    // ==> failure message:
    // java.lang.AssertionError:


    assertThat(responseString, anyOf(containsString("color"), containsString("colour")));
    // ==> failure message:
    // java.lang.AssertionError:
    // Expected: (a string containing "color" or a string containing "colour")
    // got: "Please choose a font"
  • Custom Matchers. By implementing the Matcher interface yourself, you can get all of the above benefits for your own custom assertions (see the sketch after this list).
  • For a more thorough description of these points, see Joe Walnes’s original post.
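As a small sketch of the custom Matcher point above (the matcher and its factory method are made up for illustration, assuming the bundled Hamcrest core classes are on the classpath):

import org.hamcrest.BaseMatcher;
import org.hamcrest.Description;
import org.hamcrest.Matcher;

public class IsPositive extends BaseMatcher<Integer> {

    public boolean matches(Object item) {
        return item instanceof Integer && (Integer) item > 0;
    }

    public void describeTo(Description description) {
        description.appendText("a positive integer");
    }

    public static Matcher<Integer> positive() {
        return new IsPositive();
    }
}

// usage: assertThat(total, positive());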

The JUnit team decided to include this API directly in JUnit. It’s an extensible and readable syntax, and it enables new features, like assumptions and theories.

Some notes:

  • The old assert methods are never, ever, going away. Developers may continue using the old assertEquals, assertTrue, and so on.
  • The second parameter of an assertThat statement is a Matcher. We include the Matchers we want as static imports, like this:
    import static org.hamcrest.CoreMatchers.is;

    or:

    import static org.hamcrest.CoreMatchers.*;


Assumptions

Ideally, the developer writing a test has control of all of the forces that might cause a test to fail. If this isn’t immediately possible, making dependencies explicit can often improve a design.
For example, if a test fails when run in a different locale than the developer intended, it can be fixed by explicitly passing a locale to the domain code.

However, sometimes this is not desirable or possible.
It’s good to be able to run a test against the code as it is currently written, implicit assumptions and all, or to write a test that exposes a known bug. For these situations, JUnit now includes the ability to express “assumptions”:

import static org.junit.Assume.*;

@Test public void filenameIncludesUsername() {
    assumeThat(File.separatorChar, is('/'));
    assertThat(new User("optimus").configFileName(), is("configfiles/optimus.cfg"));
}

@Test public void correctBehaviorWhenFilenameIsNull() {
    assumeTrue(bugFixed("13356")); // bugFixed is not included in JUnit
    assertThat(parse(null), is(new NullDocument()));
}


Theories

More flexible and expressive assertions, combined with the ability to state assumptions clearly, lead to a new kind of statement of intent, which is called a “Theory”. A test captures the intended behavior in one particular scenario. A theory captures some aspect of the intended behavior in possibly infinite numbers of potential scenarios. For example:

@RunWith(Theories.class)
public class UserTest {
    @DataPoint public static String GOOD_USERNAME = "optimus";
    @DataPoint public static String USERNAME_WITH_SLASH = "optimus/prime";

    @Theory public void filenameIncludesUsername(String username) {
        assumeThat(username, not(containsString("/")));
        assertThat(new User(username).configFileName(), containsString(username));
    }
}

This makes it clear that the user’s filename should be included in the config file name, only if it doesn’t contain a slash. Another test or theory might define what happens when a username does contain a slash.
UserTest will attempt to run filenameIncludesUsername on every compatible DataPoint defined in the class. If any of the assumptions fail, the data point is silently ignored. If all of the assumptions pass, but an assertion fails, the test fails.

Theories match data points by type, so if you have a theory that takes an int as an argument and you have five int @DataPoints, your theory will run a total of five times. This gets more involved if you add a second parameter that is a double, because the theory is then run for every combination of the int and double data points (see the sketch below).
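A quick sketch of that matching-by-type behavior; the class and values are made up, and with two Integer and two Double data points the theory runs four times:

import static org.junit.Assert.assertTrue;

import org.junit.experimental.theories.DataPoint;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class ScalingTest {

    @DataPoint public static Integer TWO = 2;
    @DataPoint public static Integer FIVE = 5;

    @DataPoint public static Double HALF = 0.5;
    @DataPoint public static Double HUNDRED = 100.0;

    // Runs once per (Integer, Double) combination: 2 x 2 = 4 invocations
    @Theory
    public void productOfPositivesIsPositive(Integer count, Double factor) {
        assertTrue(count * factor > 0);
    }
}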

I’d hate to wrap up a discussion about JUnit without mentioning TestNG. TestNG and JUnit are competing testing frameworks, but they are not mutually exclusive. I normally use both in my projects, since I can take advantage of features that one has and the other does not. That said, I usually lean toward JUnit during development, since it is the more widely recognized and better understood by the people I work with. Both are great testing frameworks, which makes it difficult to pick one over the other, so I just don’t.