Tuesday, 10 April 2012

Writing lightweight REST integration tests with the Jersey Test Framework

Writing REST services with JAX-RS (and its reference implementation Jersey) is easy. A class annotated with @Path and some methods with @GET, @POST, ... annotations is enough for a fully functional REST service. Real-world applications, however, are more complex. There are request filters for authorization and access control, context providers for injecting data-access objects, mappers that convert exceptions to appropriate HTTP responses, MessageBodyReaders and -Writers that convert JSON and XML to and from Java objects, and so on. All these components can (and should) be tested with unit tests. But this is not enough: to be sure that these components work together correctly, integration tests are needed. These can be costly to run, because they need the full environment to be configured and running. And the more complex an application, the more complex it is to set up this environment (web server, database, search engine, message queue, ...).

The Jersey Test Framework offers the possibility to write lightweight integration tests that do not need any external resources to be available. The web container in which all components (resources, filters, mappers, ...) run is configured and started on the fly. Moreover, it is possible to provide mocks for data-access objects and thus eliminate the need for external services.

A short introduction to the Jersey Test Framework can be found in the Jersey documentation: http://jersey.java.net/nonav/documentation/latest/test-framework.html. The code for the following example is available on github: https://github.com/mlex/jerseytest.

Example REST service

Let's start with a simple example. The following class implements a TODO service: you can get the list of TODOs, add new TODOs, and remove a TODO from the list.

@Path("/todo")
public class TodoResource {

    @Context
    private TodoService todoService;

    @GET
    public String getTodos() {
        return StringUtils.join(todoService.getAllTodos(), ",");
    }

    @POST
    public void addTodo(String newTodo) {
        todoService.addTodo(newTodo);
    }

    @DELETE
    @Path("/{todo}")
    public void removeTodo(@PathParam("todo") String todoToRemove) {
        todoService.removeTodo(todoToRemove);
    }
}

The instance of TodoService is injected into the REST resource using a simple SingletonTypeInjectableProvider:

@Provider
public class TodoServiceProvider extends
        SingletonTypeInjectableProvider<Context, TodoService> {

    public TodoServiceProvider() {
        super(TodoService.class, new TodoService());
    }
}

To keep the example as simple as possible, the TodoService simply stores the TODOs in a list. In a real-world application, the service would of course write the todos to a database.

public class TodoService {

    private final List<String> todos = new ArrayList<String>();

    public List<String> getAllTodos() {
        return new ArrayList<String>(todos);
    }

    public void addTodo(String todo) {
        todos.add(todo);
    }

    public boolean removeTodo(String todo) {
        if (!todos.remove(todo)) {
            throw new TodoNotFoundException();
        }
        return true;
    }
}

The most interesting part of this example (and the part that demonstrates the need for integration tests on top of unit tests) is the exception thrown in the removeTodo method. This exception is not caught in the TodoResource. It is propagated and finally transformed into a 400 response by the following exception mapper:

@Provider
public class NotFoundMapper implements ExceptionMapper<TodoNotFoundException> {

    public Response toResponse(TodoNotFoundException e) {
        return Response.status(Response.Status.BAD_REQUEST)
                .entity("TodoNotFoundException").build();
    }
}
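
The TodoNotFoundException itself is not shown in the post; a minimal sketch of what it could look like (the actual class in the example repository may differ):

```java
// Minimal sketch: an unchecked exception, so neither TodoService nor
// TodoResource needs a throws-clause. The message is informational only;
// the 400 response body is produced by the NotFoundMapper.
public class TodoNotFoundException extends RuntimeException {
    public TodoNotFoundException() {
        super("TodoNotFoundException");
    }
}
```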

With these classes, our todo-service is ready to use. To check if everything is working, we can use curl:

curl -XPOST -H "Content-Type: text/plain" --data "fetch milk" \
    http://localhost:8080/mjl-jersey-server/todo
curl -XPOST -H "Content-Type: text/plain" --data "call steve" \
    http://localhost:8080/mjl-jersey-server/todo
curl -XGET http://localhost:8080/mjl-jersey-server/todo
# fetch milk,call steve

curl -XDELETE http://localhost:8080/mjl-jersey-server/todo/fetch%20milk
curl -XGET http://localhost:8080/mjl-jersey-server/todo
# call steve

curl -v -XDELETE http://localhost:8080/mjl-jersey-server/todo/fetch%20milk
# ...
# < HTTP/1.1 400 Bad Request
# ...
# TodoNotFoundException

Testing the REST-service

Now we want to write tests for the todo service. Testing the get- and add-todo methods with the Jersey Test Framework would not be much different from simple unit tests. The power of the Jersey Test Framework becomes clear when testing the remove-todo method: when a user wants to delete a non-existent todo, we expect the service to return a 400 response. Ensuring this with standard unit tests would be hard. The test case using the Jersey Test Framework is quite simple.

public class TodoResourceTest extends JerseyTest {

    public static TodoService todoServiceMock = Mockito.mock(TodoService.class);

    @Override
    protected WebAppDescriptor configure() {
        // package(s) to scan for resources and providers; adjust to your
        // project layout (the scanned packages must contain TodoResource,
        // NotFoundMapper and the MockTodoServiceProvider below)
        return new WebAppDescriptor.Builder("com.example.jerseytest").build();
    }

    @Test
    public void shouldReturn400OnNotFoundException() {
        String todo = "test-todo";
        Mockito.when(todoServiceMock.removeTodo(todo))
                .thenThrow(new TodoNotFoundException());
        ClientResponse response = resource().path("todo/" + todo)
                .delete(ClientResponse.class);
        Assert.assertEquals(ClientResponse.Status.BAD_REQUEST,
                response.getClientResponseStatus());
    }

    @Provider
    public static class MockTodoServiceProvider extends
            SingletonTypeInjectableProvider<Context, TodoService> {

        public MockTodoServiceProvider() {
            super(TodoService.class, todoServiceMock);
        }
    }
}

Some explanations:
Because we do not want to connect to an external database, the TodoService has to be mocked. This is done by defining a provider that injects a mocked TodoService. Because we also want to configure the mock object inside our tests, the MockTodoServiceProvider is defined as an inner class of the test, and the mock object is stored in a static field of the test class.

The test is configured to use a GrizzlyWebTestContainer; see the last part of this blog post for the advantages and disadvantages of using other containers. The configuration of the test container is done in the configure() method.
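
If you want to pin the container type explicitly instead of relying on the classpath-based factory lookup, JerseyTest also lets you override getTestContainerFactory(). A sketch (the factory's package name is taken from the Jersey 1.x jersey-test-framework-grizzly module; verify it against your version):

```java
import com.sun.jersey.test.framework.JerseyTest;
import com.sun.jersey.test.framework.spi.container.TestContainerException;
import com.sun.jersey.test.framework.spi.container.TestContainerFactory;
import com.sun.jersey.test.framework.spi.container.grizzly.web.GrizzlyWebTestContainerFactory;

public class GrizzlyTodoResourceTest extends JerseyTest {

    // Without this override, the framework picks whatever test container
    // factory it finds on the test classpath.
    @Override
    protected TestContainerFactory getTestContainerFactory()
            throws TestContainerException {
        return new GrizzlyWebTestContainerFactory();
    }
}
```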

In the test method itself, the TodoService mock is instructed to throw a TodoNotFoundException when removeTodo() is called. A WebResource pointing to our test container is created and a DELETE request is fired. If everything works, the result of this request must be a 400 error, and the response body must contain the reason for the error.

In the same way, you can also test other components, like authorization filters, access control and message-body mappers (Jackson or JAXB), without any external environment being present. Of course, there is also a downside to this kind of test: it is rather slow. Setting up and tearing down the web container on the fly is very expensive. Another disadvantage is that most test containers use real system ports for their communication (the only exception is the InMemoryTestContainer, which has other shortcomings). These ports may be blocked by other applications, which causes the tests to fail. This is a problem when using helpers like Infinitest, where it can happen that multiple tests run at the same time.

Integrated Client-Server-Tests

If there is also a Java-based client implementation for the REST service, this client can be used in Jersey tests, too. Our example TODO service comes with such a client implementation:

public class TodoClient {

    public static final String TODO_RESOURCE_PATH = "/todo";

    private final String uri;

    private final Client client = new Client();

    public TodoClient(String uri) {
        this.uri = uri;
    }

    public WebResource resource() {
        return client.resource(uri).path(TODO_RESOURCE_PATH);
    }

    public WebResource resource(String todo) {
        return resource().path("/" + todo);
    }

    public String getAllTodos() {
        return resource().get(String.class);
    }

    public void addTodo(String todoToAdd) {
        resource().type(MediaType.TEXT_PLAIN).post(todoToAdd);
    }

    public void removeTodo(String todoToRemove) {
        try {
            resource(todoToRemove).delete();
        } catch (UniformInterfaceException e) {
            if (e.getResponse().getClientResponseStatus()
                        == ClientResponse.Status.BAD_REQUEST
                    && "TodoNotFoundException".equals(
                        e.getResponse().getEntity(String.class))) {
                throw new TodoNotFoundException();
            } else {
                throw e;
            }
        }
    }
}

The most interesting part of this client is again the removeTodo() method. It not only executes the HTTP request, but also checks whether the request failed because the todo to delete did not exist, by inspecting the response code and the response body. This can be used to simplify the Jersey test:

    private TodoClient todoClient() {
        TodoClient todoClient = new TodoClient(getBaseURI().toString());
        Whitebox.setInternalState(todoClient, "client", client());
        return todoClient;
    }

    @Test(expected = TodoNotFoundException.class)
    public void removeTodoShouldThrowNotFoundException() {
        final String todo = "test-todo";
        Mockito.when(todoServiceMock.removeTodo(todo))
                .thenThrow(new TodoNotFoundException());
        todoClient().removeTodo(todo);
    }

Now this test really cannot be called a unit test anymore. In these few lines, we check that the TodoNotFoundException thrown by the TodoService is correctly converted into an HTTP response, which our client understands and converts back into the appropriate TodoNotFoundException. If any of the involved components is changed without adapting the others, the test will fail.

Tips and Tricks

Decide what type of container to use before writing tests

There are two kinds of containers available for the jersey test framework: high-level servlet containers and low-level containers. Both have advantages and disadvantages.

The high-level servlet containers offer the full functionality of a servlet container, automatically injecting instances of HttpServletRequest, etc. If your application relies heavily on servlet-specific classes, these containers will be your first (and probably only) choice. The servlet functionality comes at a price: all implementations need to open system ports, which makes the tests more fragile and also a little slower. Another drawback of using real servlet containers in tests is that you don't have direct access to the instances of your resources and (context) providers. To allow the use of mock objects, you must work around this problem, for example by assigning context objects to static fields, as we did with the mocked TodoService.

Low-level containers, on the other hand, allow you to directly modify the ResourceConfig that is used. This gives you access to all instances (resources, providers, filters) of the REST service, which greatly simplifies mocking. So if you don't rely on the servlet API, you'll probably go for a low-level container.

Do not use WebAppDescriptor for low-level containers

Although possible, I do not recommend using WebAppDescriptors with low-level containers. The reason lies in the method LowLevelAppDescriptor.transform(), which is used to transform a WebAppDescriptor into a LowLevelAppDescriptor when a low-level container is used. This method simply ignores all non-boolean init-params. Moreover, there is a bug when using the property com.sun.jersey.config.property.packages with multiple (colon-separated) package names. Even if these shortcomings get fixed, you should not rely on the transform() method. The power of low-level containers lies in the possibility to directly modify the ResourceConfig, which is only possible when using a LowLevelAppDescriptor.
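
A sketch of the alternative: build the LowLevelAppDescriptor directly in configure() and register additional singletons on its ResourceConfig. This assumes the Builder(Class...) constructor and the getResourceConfig() accessor of the Jersey 1.x test framework; check both against your version:

```java
import com.sun.jersey.test.framework.JerseyTest;
import com.sun.jersey.test.framework.LowLevelAppDescriptor;

public class LowLevelTodoResourceTest extends JerseyTest {

    @Override
    protected LowLevelAppDescriptor configure() {
        LowLevelAppDescriptor descriptor =
                new LowLevelAppDescriptor.Builder(TodoResource.class).build();
        // The ResourceConfig is directly accessible, so providers (for
        // example one holding a mocked TodoService) can be registered
        // as singleton instances.
        descriptor.getResourceConfig().getSingletons()
                .add(new TodoServiceProvider());
        return descriptor;
    }
}
```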

Speedup jersey tests

Because the JerseyTest base class starts a new web container before each test, the tests are rather slow. One possibility to speed them up is to start a web container only once per test suite. An implementation of a base class doing this is included in the example application.
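
One possible shape for such a base class, sketched here as a hypothetical outline (the actual implementation in the example application may look different): keep the test container in a static field, start it lazily on first access, and stop it in a JVM shutdown hook, so all test classes of a suite share one running instance.

```java
import java.net.URI;

import com.sun.jersey.test.framework.AppDescriptor;
import com.sun.jersey.test.framework.spi.container.TestContainer;
import com.sun.jersey.test.framework.spi.container.TestContainerFactory;

public abstract class PerSuiteContainerTest {

    private static TestContainer container;

    // Starts the container on the first call; later calls (from other
    // test classes in the suite) reuse the running instance.
    protected static synchronized TestContainer sharedContainer(
            TestContainerFactory factory, URI baseUri, AppDescriptor descriptor) {
        if (container == null) {
            container = factory.create(baseUri, descriptor);
            container.start();
            Runtime.getRuntime().addShutdownHook(new Thread() {
                @Override
                public void run() {
                    container.stop();
                }
            });
        }
        return container;
    }
}
```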

Extended InMemoryTestContainer

The InMemoryTestContainer is the only container that does not open any real ports on the system. Being a low-level container, it of course offers no servlet-specific functionality. But if you do not rely on the servlet API too much, this container is the perfect choice for really fast and lightweight integration tests.

However, the InMemoryTestContainer that comes with the Jersey Test Framework has another drawback: you cannot declare any request or response filters, because they are overridden by logging filters. To work around this problem, I implemented my own in-memory test container (basically copying the original code and removing the logging filters). The code is also included in the example application.


  1. Thank you.

    This is the most straight-forward and complete description I've seen on injecting dependencies into a jersey test container. FWIW, I used it to inject a mock HttpServletContext into my resource, and so, avoid using the grizzly container. A slight abuse of your approach, but perfect for my needs.

    Thanks again, for taking the time!

    1. Hi Allen,
      I'm glad that the post was helpful. If you are interested in the topic of in-container integration testing, you should take a look at the Spring TestContext Framework and the @WebAppConfiguration annotation. The approach there is much cleaner than in the Jersey Test Framework. But unfortunately it exists only for Spring web applications, not for JAX-RS.


  2. I want to ask you where exactly you defined that all JerseyTests run once per suite. Thank you

    1. The tests are simple JUnit 4 tests. If you execute them with the JUnit test runner, all tests will be placed in one "test suite".

  3. Thanks for the thorough introduction to JerseyTest!

    There are two problems with the code (probably got cut while editing HTML or something):
    1. return WebAppDescriptor.Builder(
    2. The last part of the following line:
    ClientResponse response = resource().path("todo/"+test-todo")