Martin Gladdish

Software 'n' stuff

Testing Jsr303 Annotated Beans

I have been looking at using jsr303 annotations to mark up some of my beans. The good news is that both Hibernate and Spring play nicely with these annotations, and will validate them at appropriate times. The bad news is that the examples of testing these annotations that I have found online so far look cumbersome and awkward.

Let’s take a simple example:

simple java bean with a single jsr303 annotation
public class SimpleBean {
    @NotNull
    private Long id;
    private String name;

    public SimpleBean(Long id, String name) {
        this.id = id;
        this.name = name;
    }
    // getters and setters...
}

So far so good, but nothing stops the constructor being passed a null id. So, let’s write a unit test that ensures our validation will barf.

public class SimpleBeanTest {

    private Validator validator;

    @Before
    public void setUp() {
        validator = Validation.buildDefaultValidatorFactory().getValidator();
    }

    @Test
    public void nullIdMustThrowValidationError() {
        SimpleBean bean = new SimpleBean(null, "valid string");
        Set<ConstraintViolation<SimpleBean>> violations = validator.validate(bean);
        assertThat(violations.size(), equalTo(1));
    }
}

If this test fails, we get something like this as the result:

java.lang.AssertionError: 
Expected: <1>
     but: was <0>
  ...

Not so pretty. This is where hamcrest’s matcher toolkit comes in handy. What if we could write our test like this instead?

unit test with custom validations matcher
public class SimpleBeanTest {
    @Test
    public void nullIdMustThrowValidationError() {
        SimpleBean bean = new SimpleBean(null, "valid string");
        assertThat(bean, hasNoViolations());
    }
}

A matcher implementation could look like this

custom jsr303 validation matcher
public class JSR303NoViolationsMatcher<T> extends TypeSafeMatcher<T> {

    private Validator validator;
    private Set<ConstraintViolation<T>> violations;

    public JSR303NoViolationsMatcher() {
        validator = Validation.buildDefaultValidatorFactory().getValidator();
    }

    @Override
    protected boolean matchesSafely(T t) {
        violations = validator.validate(t);
        return violations.isEmpty();
    }

    @Override
    public void describeTo(Description description) {
        description.appendText("no jsr303 validation violations");
    }

    @Override
    protected void describeMismatchSafely(T item, Description mismatchDescription) {
        mismatchDescription.appendText("was ").appendValue(violations);
    }

    @Factory
    public static <T> JSR303NoViolationsMatcher<T> hasNoViolations() {
        return new JSR303NoViolationsMatcher<T>();
    }
}

And our new-look test spits out the following upon failure:

java.lang.AssertionError: 
Expected: no jsr303 validation violations
     but: was <[ConstraintViolationImpl{interpolatedMessage='may not be null', propertyPath=id, rootBeanClass=class com.example.SimpleBean, messageTemplate='{javax.validation.constraints.NotNull.message}'}]>

There. Much nicer. Not only do we know that the test failed, but we are immediately told how it failed.

Thanks to planetgeek for a nice article about writing custom hamcrest matchers.

Using Typesafe Role Enums With Spring Security

Spring Security is a seriously useful tool but one of the things that has been nagging away at the back of my mind about it is that it is so heavily reliant on magic strings. Take a typical example:

spring security config fragment
<http>
    <intercept-url pattern="/**" access="ROLE_USER"/>
    ...
</http>

<global-method-security pre-post-annotations="enabled"/>

This configures all URLs in the application to require that the user is logged in and has the privilege named ROLE_USER.

sensitive method
@PreAuthorize("hasRole('ROLE_SYSADMIN')")
public String getSensitiveInformation() {
    return "Only special people are allowed to see this information";
}

This ensures that anyone calling this method (either directly or indirectly) must be logged in and have the privilege named ROLE_SYSADMIN.

So far, so straightforward.

The problem is that these role names proliferate across the codebase. Chances are you will want to refer to the same role name in many different places in your application. If these role names were in code, you would typically refactor them out into a single representation as soon as you had two different references to them in the codebase. Unfortunately, this is not so simple with Spring Security.

The good news is that it is possible.

Authority enum
package com.example;

public enum Authority {
    USER,
    SYSADMIN
}

Which can be referred to in Spring’s expression language like this:

spring security config using role enum
<http use-expressions="true">
    <intercept-url pattern="/**" access="hasRole(T(com.example.Authority).USER.toString())"/>
    ...
</http>

And similarly on method annotations

sensitive method with typesafe role enum
@PreAuthorize("hasRole(T(com.example.Authority).SYSADMIN.toString())")
public String getSensitiveInformation() {
    return "Still only special people are allowed to see this information";
}

Unfortunately IntelliJ Community Edition’s ‘Find Usages’ is not clever enough to return these Spring expression language references, but it does at least feel better than sprinkling identical magic strings across the codebase.
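A nice side effect is that Java code which assigns roles can reference the same constants instead of repeating the strings. Here is a minimal sketch; the `asAuthorities` helper is hypothetical (not part of Spring Security), and the real wiring would map these names onto Spring Security’s `GrantedAuthority` objects:

```java
import java.util.ArrayList;
import java.util.List;

public class AuthorityExample {

    // Same enum as in the Spring expressions above.
    public enum Authority {
        USER,
        SYSADMIN
    }

    // Hypothetical helper: derive the authority strings that the security
    // layer compares against from the enum constants themselves, so the
    // names only ever exist in one place.
    public static List<String> asAuthorities(Authority... roles) {
        List<String> names = new ArrayList<String>();
        for (Authority role : roles) {
            names.add(role.toString());
        }
        return names;
    }

    public static void main(String[] args) {
        // The same constant that the @PreAuthorize expression resolves via
        // T(com.example.Authority).SYSADMIN.toString()
        System.out.println(asAuthorities(Authority.USER, Authority.SYSADMIN)); // prints [USER, SYSADMIN]
    }
}
```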

Inline Editing Forms With Javascript

I have picked up some contracting work to build a new web site. It is very early days and I am just putting together some initial web forms, and I want a way of editing information that is a little less ugly than presenting a standard set of populated HTML form controls. Instead, I like the idea that the information is written to the page as normal text, but clicking on it allows you to change the value inline. After a little poking about, there appear to be two main contenders to help implement this in JQuery: Jeditable and Editable.

Feature comparison of Jeditable and Editable JQuery plugins

                              Jeditable                               Editable
  html5 form elements         no                                      no
  extendable input types      yes, via inbuilt addInputType method    yes, by modifying the source directly
  default POST data format    id:element_id value:element_value       none

Adding html5 input types

Editable

Editable does at least provide an editableFactory in its source which is intended to be extended. Although easy to do, it still doesn’t get around the fundamental problem of having to modify the source itself.

modifying .editableFactory directly to handle ‘url’ input types
$.editableFactory = {
  'text': {
      toEditable: function($this,options){
          $('<input/>').appendTo($this)
                       .val($this.data('editable.current'));
      },
      getValue: function($this,options){
          return $this.children().val();
      }
  },
  // adding custom support for <input type="url">
  'url': {
      toEditable: function($this,options){
          $('<input type="url"/>').appendTo($this)
                       .val($this.data('editable.current'));
      },
      getValue: function($this,options){
          return $this.children().val();
      }
  },
  // end of added block
  ...

Jeditable

There really is not much between the two implementations in terms of complexity or verbosity, but the fact that Jeditable allows extensions without modifying its source swings it for me.

Extending Jeditable’s set of supported input types
<script type="text/javascript">
    $.editable.addInputType('url', {
        element : function(settings, original) {
            var input = $('<input type="url">');
            $(this).append(input);
            return(input);
        }
    });
</script>

Posting changes back to the server

Both of the examples below work with the same template. You will also notice that this is not plain HTML, but a Velocity template. It is served from a Spring/Velocity application, more details of which will follow in later posts.

#springBind("myDomainObject.url")
<span class="editableURL">$status.value</span>

Editable

Although Editable doesn’t support this out of the box, it is pretty straightforward.

<script type="text/javascript">
    $(document).ready(function() {
        $('.editableURL').editable({type:'url', submit:'save', cancel:'cancel', onSubmit:end});
    });

    function end(content) {
        $.post(window.location.href, {url: content.current});
    }
</script>

So we wait for the document to be loaded by the browser, then decorate all elements with the “editableURL” class with Editable’s behaviour. The end function is triggered when you submit your change, and is responsible for POSTing the data back to the server. Note that, in this example, data is POSTed back to the same URL from which the page was served.

All in all, this is rather nice, and we have complete control over the data passed back to our server.

Jeditable

As mentioned in the comparison table above, Jeditable by default will POST data in the format id: element_id, value: element_value. This is fine for table-based data in which every field has an id, but a long way from OK for our domain-object-backed Spring application. The good news is that we get complete control over Jeditable’s POST behaviour, too.

<script type="text/javascript">
    $(document).ready(function() {
        $('.editableURL').editable(function(value, settings) {
            $.post(window.location.href, {url: value});
            return(value);
        }, {submit: 'save', type: 'url'});
    });
</script>

This snippet works broadly the same way, with the added control that the return value of the function is what is used as the new value displayed on the page. I can’t make up my mind whether this is a good thing. As the script stands, if the server encounters an error updating the field, then the new value will be displayed on the page and the user will be none the wiser that anything went wrong.

Conclusion

Although Editable is arguably preferred by Stack Overflow users, there is a little snag with both of them. Neither library seems to handle keyboard navigation particularly well (at least in Chrome on OS X). Editable’s submit and cancel buttons do nothing when they are triggered by receiving focus and hitting the spacebar, whereas Jeditable’s support is better in that the buttons do work, just that you have to be quick about it. If you leave the focus on the button for more than a second or so then it assumes you have given up and cancels the edit, taking away the form control again.

All in all, Jeditable seems to suit my preferences a little better. Its public revision control and nicer web site also make it seem just that little bit more polished. The only negative point, and it is a small one, is that Jeditable doesn’t provide a labelled version number. Judging by the readme on github it is up to version 1.7.2 at the time of writing, so that is the number I inserted into the filename myself before using it.

A complete example of using Jeditable
<!DOCTYPE html>
<html>
    <head>
        <title>Inline editing forms with JQuery and Jeditable</title>
        <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js" type="text/javascript"></script>
        <script src="/{your_path_here}/jquery.jeditable-1.7.2.mini.js" type="text/javascript"></script>
    </head>
    <body>
        #springBind("myDomainObject.url")
        <span class="editableURL">$status.value</span>

        <script type="text/javascript">
            $.editable.addInputType('url', {
                element : function(settings, original) {
                    var input = $('<input type="url">');
                    $(this).append(input);
                    return(input);
                }
            });
        </script>

        <script type="text/javascript">
            $(document).ready(function() {
                $('.editableURL').editable(function(value, settings) {
                    $.post(window.location.href, {url: value});
                    return(value);
                }, {submit: 'save', type: 'url'});
            });
        </script>

    </body>
</html>

How to Change a Headlamp Bulb on an Alfa Romeo GTV

The dipped headlamp bulb blew on my Alfa Romeo GTV the other day and I spent way too long today working out how to replace it. What’s more, there didn’t seem to be clear instructions that I could find on the net, so I thought the decent thing to do would be to write it up here.

Why Are All the Good Ideas Already Taken?

The other evening on the way home from work, I had a brilliant idea. It was one of those that had been churning away in the back of my mind for a while but suddenly popped out, fully formed, in a way that hadn’t occurred to me before. This was it, this was going to be the interesting side project that had real traction.

I had been reading again about Netflix’s Chaos Monkey. It is a brilliant idea - in order to be good at something, you need to do it often, and therefore the best way of handling failure is to fail often. The other half of the equation was an approach to code-reviewing unit tests that I have been using for a while. If you can delete a line in the class under test, and the unit test still passes, then you didn’t need that line in the first place (although unfortunately it is usually the other way around - that more scenarios need to be added to the test). What if you could combine the two ideas? What if you had a system that would, as part of the build, randomly modify your source files and verify that your tests failed? That would be brilliant. It would be a really useful indicator of how good your tests actually are (and, for good measure, fully automate the net output of some of the less useful developers I have worked with over the years!).

The first thing I needed was something that understands Java syntax. More than that, I wanted to make use of a tool that is already capable of modifying Java source files. Understanding syntax was important because my first use case was going to be finding a magic number and incrementing it by one. Let’s look at an example method:

(noddy_example.java) download
public String noddyExample() {
    String dontDoThisAtHome = "";
    for (int i = 0; i < 5; i++) {
        dontDoThisAtHome += i;
    }
    return "value: " + dontDoThisAtHome;
}

and its associated test:

(noddy_example_test.java) download
@Test
public void noddyValueIsCorrect() {
    assertThat(noddyExample(), startsWith("value"));
}

So far, so trivial. It is a rather contrived example, but we have all seen more complicated examples of effectively the same issue in production code. The essential point is that we are all happy. The source compiles, it passes its tests, and the system behaves as expected. It is worth pointing out that a traditional test code coverage metric is no good here because the loop did execute when the test ran, so is considered to be covered. So, we run our test chaos monkey on our source code and let it increment a magic number by one.

(noddy_example_monkeyed.java) download
public String noddyExample() {
    String dontDoThisAtHome = "";
    for (int i = 0; i < 6; i++) {
        dontDoThisAtHome += i;
    }
    return "value: " + dontDoThisAtHome;
}

If we re-run our test now, we find that the test still passes. We have changed our source file in a meaningful way and yet our tests have not proven good enough to catch the changed behaviour. This, ladies and gentlemen, is what we in the trade call A Bad Thing™.
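The fix is to tighten the assertion so that this mutant is killed. A minimal plain-Java sketch of the idea (the post’s test used Hamcrest’s startsWith; an exact-match check stands in here for something like assertThat(noddyExample(), equalTo("value: 01234"))):

```java
public class NoddyExample {

    // The method under test, exactly as in the post.
    public static String noddyExample() {
        String dontDoThisAtHome = "";
        for (int i = 0; i < 5; i++) {
            dontDoThisAtHome += i;
        }
        return "value: " + dontDoThisAtHome;
    }

    public static void main(String[] args) {
        // The original assertion only checked the prefix, so the mutated
        // loop bound (6 instead of 5) survived. Asserting the exact value
        // kills that mutant: the mutated code would return "value: 012345".
        String result = noddyExample();
        if (!result.equals("value: 01234")) {
            throw new AssertionError("expected value: 01234 but was " + result);
        }
        System.out.println(result); // prints value: 01234
    }
}
```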

Also, this is where I get to the point. Five minutes of googling later I find out that, of course, this is already a known and understood way of testing. It is called Mutation Testing and people smarter than me have been doing it for years. Then, like all these sorts of questions, once you know the name for it all sorts of useful information comes flooding out of search engines. There is a stackoverflow question on how to integrate Java mutation testing with Maven and an answer directing me to PIT, an open source mutation testing tool. The question itself lists a bunch of other Java mutation testing tools; one of which, Jester, has been the subject of an IBM Developer Works article and a rather nice quote from Robert C. Martin on how it guided him to a more simple implementation of a test-driven coding exercise. So this really is not a new, cute, extension to our now traditional unit test safety net, but just me independently stumbling upon an idea that is apparently an already solved problem. That is great news from the point of view of getting this idea quickly in to practice in the day job, but just doesn’t feel as much fun as if it were my own idea.
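For anyone wanting to try PIT against their own Maven build, the plugin coordinates below are real, but the version placeholder and the com.example package names are illustrative; check pitest.org for the current release and point the filters at your own code:

```xml
<!-- build/plugins section of pom.xml -->
<plugin>
    <groupId>org.pitest</groupId>
    <artifactId>pitest-maven</artifactId>
    <!-- placeholder: substitute the current version from pitest.org -->
    <version>1.x</version>
    <configuration>
        <!-- illustrative package filters; point these at your own classes -->
        <targetClasses>
            <param>com.example.*</param>
        </targetClasses>
        <targetTests>
            <param>com.example.*</param>
        </targetTests>
    </configuration>
</plugin>
```

With that in place, mvn org.pitest:pitest-maven:mutationCoverage runs the mutation analysis and writes an HTML report of surviving mutants.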

So why are all the good ideas already taken? People smarter than you or I thought of them years ago.

HDD v SSD Compiling Benchmarks

If ever there was a subject that attracts unfounded claims and counter-claims, it is performance benchmarking. Given the dearth of useful information I have been able to find about whether SSDs improve compilation time or not, I thought I would put up the numbers from a small test I ran today.

Program to be compiled:

  • Java webapp of ~10,000 executable lines of code
  • Maven 3
  • Java 1.6

New system:

  • HP Compaq 8100 Elite Convertible Minitower PC
  • Intel Core i7
  • 6GB ram
  • 64-bit Windows 7
  • OCZ Vertex SATA SSD 60GB
  • Barracuda 7200.12 SATA 3Gb/s 500GB HDD

Old system:

  • Dell Dimension 9200
  • Intel Core 2 6300
  • 4GB ram
  • SATA 260GB HDD

Benchmark Results

All times are as reported by Maven, not the entire duration of the shell command. Test command: mvn clean install

So, across the three build runs I performed on each environment, the timings are pleasingly consistent. The good news is that my new PC significantly outperforms my old one. The somewhat surprising news is that the HDD is consistently, marginally quicker than the SSD. This suggests that this Maven goal is not bound by disk I/O, and that it is something else about my new system’s spec that gives such a performance boost. This reinforces what Joel Spolsky found when he ran a similar test a couple of years ago, and also one of the golden rules of performance work: your assumptions will be wrong, so measurement is king. Amen.

Reading Good to Great by Jim Collins

This book was recommended to me over a year ago but I only now have found the time to get myself a copy and read it.

Good to Great claims to be a fact-based investigation into what factors made a previously average-performing company change into one that significantly outperformed both the market and its competitors. I have only read the first fifty pages of the book so far, so can’t claim to either agree or disagree with just how well the evidence supports the book’s claims, but I think there are a couple of points that are worth highlighting based upon what I have read so far.

Firstly, of the eleven companies that meet the author’s criteria for having transformed from “good” to “great”, Fannie Mae and Gillette are the only two I have heard of. This isn’t good news, as we all know how well Fannie Mae turned out. Given that the book was written in 2001 and Fannie Mae’s spectacular collapse happened in 2008, it is easy to snipe with the benefit of hindsight, but it can’t help but temper my view of just how “great” these selected companies really are. Jim Collins’s approach appears to have been to measure greatness by the value of a company’s stock returns, and I agree that a data-based metric of some kind is needed to make this comparative study worthwhile, but it is worth bearing in mind that there is much, much more to how great a company is than its performance on the stock market.

Secondly, one of the conclusions drawn is that they found no correlation between executive remuneration and company performance. That includes performance and target-related incentives, which is one of my own bugbears. I have always found it mildly insulting to be offered a target-related bonus. They always seem to be used as a way of modifying behaviour and, I think, entirely miss the point. Yes, money is the reason why we turn up for work, but it’s my own job satisfaction that is the single most important goal once I’m there. Bonuses don’t typically make the difference between an income you’re happy with and one you’re not (would 10-15% of your salary really make that much difference, given that you can’t budget for it?). A bonus attached to a specific goal really doesn’t make me want to achieve that goal any more than I did already and, if that goal is in conflict with my perception of job satisfaction (doing the right thing for the job at hand), then something is wrong already. Either my understanding of what doing a good job looks like is wrong, or the incentivised target is wrong. Just setting the incentive doesn’t fix this - if I thought achieving it was part of doing a good job, then I would be doing everything I could to make it happen out of a need for job satisfaction alone. I have thought for a long time now that if you can pay people well to do a good job, then that’s exactly what you should do. You can then reasonably expect people to do a good job for you based upon mutual respect. It’s this mutual respect that I think is eroded by performance-based bonuses.

Given this is from the first fifty pages alone, there’s evidently quite a lot to digest in this one book. I’ll undoubtedly write more as I get through the book.

Collins, Jim Good to great: why some companies make the leap… and others don’t Random House Business Books 2001 ISBN 9780712676090

Mocking iPhone Development

So now that ordinary unit testing is up and running, the next step is to get a mocking library in place so that I can simulate an external library’s behaviour under different scenarios.

OCMock seems to be the library of choice, and the good news is that it appears to have a pretty neat API. The bad news comes when you want to, er, use it. I want the OCMock library linked only to my testing targets, which ought to be straightforward enough, but following OCMock’s instructions rigorously only led me to pain. Here is a dead-simple test:

(LogicTests.m) download
#import "LogicTests.h"
#import <OCMock/OCMock.h>

@implementation LogicTests

- (void)testAcceptsStubbedMethod {
    id mock = [OCMockObject mockForClass: [NSString class]];
    [[mock stub] lowercaseString];
    [mock lowercaseString];
}

@end

And the error that I get when trying to build the LogicTests target:

What’s going on here? XCode must recognise that the library is imported OK because it has OCMockObject correctly syntax highlighted, but the build fails at the linking stage. After much googling, Vincent Daubry appears to have the answer. It seems that there is some sort of incompatibility introduced between XCode 3.2.3, iOS4 and OCMock. Given that I am running XCode 3.2.5, iOS4 and OCMock 1.7, it felt likely that I had the same problem. Sure enough, replacing OCMock 1.7 with the latest version from its subversion repository did the trick. Annoyingly, 1.7 was released after Vincent’s post and I made the mistake of assuming that this more recent release would include the fix. Apparently not.

Confirming that building OCMock from its source myself, and using that rather than the released version 1.7, hadn’t broken any of the other project configurations led me to discover the next issue. Running the application on my device no longer works!

This appears to be because I made the application target depend upon the logic testing target, as suggested by Apple’s own documentation. This has the immense benefit that the tests are always run before running the application, which ensures that the tests will be run regularly. I really can’t be bothered right now to work out how to resolve this [1], so I will just have to make do with making my application target independent of my testing target, and make sure I remember to run my tests myself.


[1] My guess is that something along the lines of making the project’s Library Search Paths use the $(CONFIGURATION)$(EFFECTIVE_PLATFORM_NAME) variables to link to specific builds of OCMock ought to do the trick. I still don’t want my application to include references to its testing dependencies though, which this approach doesn’t solve.

Unit Testing Outside My Comfort Zone

I have decided to have a go at writing an app for the iPhone with my new-found free time. It is a platform I haven’t written for before so it’s a nice opportunity to delve into some technologies that I am not very familiar with. First on the list is choosing which language/environment to write my application in. Given the app I have in mind will need to talk to a physical sensor plugged in to the dock connector, I am going to need access to native APIs. So, after briefly flirting with Appcelerator’s Titanium and discounting MonoTouch as I don’t know C#/.NET either, I have settled upon getting to grips with Objective-C and XCode.

I have not written anything in C/C++ for over ten years, which means I have essentially forgotten what little I knew. That’s not a problem, and I thought the best place to start would be Apple’s developer documentation. Let’s start with the good news. These documents appear to be put together really quite well. They’re structured for people new to both iOS and Objective-C development, as well as having links off to the XCode docs in relevant places. This is a good thing. Their hand-holding tutorials have got me up to speed with getting a bare-bones single-screen application up and working, and the language tutorial has given me a sense of the key differences between Objective-C and Java (ugh, memory management).

What is astonishing, though, is the way that unit testing seems to be an afterthought across the board. Of the three Objective-C books I flicked through in Foyles the other day, not one of them even mentioned unit testing. That is including a book specifically targeting Java programmers looking to convert their existing knowledge. Apple’s own iOS Development Guide at least covers unit testing in chapter 7 of 10 and there is a small XCode Unit Testing Guide in Apple’s online documentation. These are OK as far as they go, and they have at least got me as far as getting my first tests up and running, but none of this suggests that unit testing is anything like as ingrained in the development process as it is with my Java experience. There certainly doesn’t appear to be any expectation that you’ll be doing anything as silly as writing your tests first.

XCode itself underlines this. Yes, unit testing is possible, but it just doesn’t feel as ingrained or as natural as with either Eclipse or IDEA. Take this as an example. Here is the first failing test, exactly as prescribed in chapter seven of the iOS Development Guide.

Wait, two errors? But I’ve only written one test! That’s because XCode is reporting the return status of the shell command as a failure, too. Have a look at a fragment of the expanded output from the first error.

Even a passing test suite is a little disappointing - you need to drill down into parsed console output to see what went on.

Is it just me, or is that a little dumb? It is what you would put up with in a third-party plugin that was thrown together quickly, or for a little-known tool, but for something as fundamental as unit tests? Really?

Time to Move On

After four and a half years working for BioMed Central, I have decided it’s time to move on and try something new.

Quite what form of new that will be, I’m not entirely sure. One of the reasons for leaving is that the job took up so much head room that there really wasn’t any time left to think about other things. My trombone playing has suffered horribly over the past couple of years, so this is a golden opportunity to put that right again. Also, it’s a good time to get into some of the technologies and ideas that I didn’t have the chance to explore whilst working full time - either due to restrictions during the day or lack of energy to pursue outside work hours.

I’m certainly not going to retire from being a techie or turn this into some kind of musical retreat - concentrating only on one side I think would drive me nuts.

The idea of working for a consultancy does seem very appealing - primarily the breadth of experience gained by going in to many different organisations and working with many different people. Contracting also appeals for mostly the same reasons - there’s even the possibility of being able to take significant blocks of time out between contracts to help the life/work balance thing. For all that, though, the idea of starting up something of my own is most tempting.

So how about this for the perfect Christmas gift - why not buy the one you love some extraordinarily expensive IT consultancy? Nothing says you care more than getting someone in to tell them how to run their programming team… surely?

More seriously, come the new year I will be open to offers of work, both musically and technically. Drop me a line either through this site or to enquiries [at] odum.co.uk.