Those who have seen my recent presentation on Test-Driven Development may be curious as to why all of my examples used services and DAOs that were defined directly as classes, instead of defining interfaces first and then having separate implementation classes.
In recent years I’ve shifted away from the use of Java interfaces when defining the interface into a service or DAO. Instead, I often just make the actual implementation the base class, and if anybody wants to stub it out they just subclass it. It’s important to note that I still think carefully about what the ‘interface’ to the base class is. In fact, I always start by just writing method stubs that throw an UnsupportedOperationException – something that you’ll see in the presentation. I then start writing tests against the class, fleshing it out from there.
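To make the workflow concrete, here’s a minimal sketch of what that stub-first style looks like. The names (CustomerService and its methods) are invented for illustration – the point is that methods not yet driven out by a test fail loudly rather than silently doing nothing:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical example: the service is a plain class whose public methods
// define its 'interface'. Methods start life as stubs throwing
// UnsupportedOperationException; failing tests then drive the real code.
public class CustomerService {

    private final Map<Long, String> namesById = new HashMap<Long, String>();

    // Already driven out by a test, so it has a real implementation.
    public void addCustomer(long id, String name) {
        namesById.put(id, name);
    }

    // Also implemented via TDD.
    public String findCustomerName(long id) {
        return namesById.get(id);
    }

    // Not yet under test: still a stub, so any accidental call fails loudly.
    public void deleteCustomer(long id) {
        throw new UnsupportedOperationException("deleteCustomer not implemented");
    }
}
```

Anyone who calls deleteCustomer before it’s ready gets an immediate, unambiguous failure rather than a quiet no-op.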
The prime benefit of this approach is that the code is easier to navigate. If you navigate to a class or method in Eclipse, it’ll go straight to the implementation. Having spent years maintaining large codebases with a zillion interfaces I have found this to be very useful. You no longer hit a dead-end when you navigate into an interface and have to look up the implementers yourself. There’s also the benefit that instead of every service or DAO having at least two classes – an interface and an implementation – you have one class. The codebase becomes less bloated.
Dependency injection is still possible. Furthermore, if I want to mock the class using a mocking framework, frameworks like jMock and EasyMock can also dynamically generate mock subclasses using CGLIB. I’ve even found that Spring is able to use CGLIB to generate transaction or security proxies for these classes. The only stipulations are that each of your classes must have a default constructor, and that the class and the methods being proxied can’t be final – which still beats the heck out of having to have two classes.
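Even without a mocking framework, the same trick works by hand. Here’s a minimal sketch (CustomerDao, GreetingService and StubCustomerDao are invented names): the collaborator is a concrete class, the test injects a hand-coded subclass stub, and injection works exactly as it would with an interface – this is essentially what a CGLIB-based framework generates for you at runtime:

```java
// Hypothetical DAO: in real life this would talk to the database, so the
// stub-first default is to fail loudly.
class CustomerDao {
    String loadName(long id) {
        throw new UnsupportedOperationException("talks to the database in real life");
    }
}

// The class under test receives its DAO via constructor injection.
class GreetingService {
    private final CustomerDao dao;

    GreetingService(CustomerDao dao) {
        this.dao = dao;
    }

    String greet(long customerId) {
        return "Hello, " + dao.loadName(customerId);
    }
}

// A test stubs the DAO by subclassing it -- no interface required.
class StubCustomerDao extends CustomerDao {
    @Override
    String loadName(long id) {
        return "Alice";
    }
}
```

In a test you’d simply write new GreetingService(new StubCustomerDao()) and assert on the result.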
However, there are a couple of limitations that I have experienced or could imagine might be a problem:
- If you alter a method that constitutes the ‘interface’ to a base class, any stub subclasses that override the original version of the method (for example, manually coded stubs) won’t automatically break. Unless you adjust them too, they’ll just be treated as an additional overloaded method rather than an override. I encountered this on a recent project where a couple of developers who were new to stubs and mocks were working on an evolving service layer. Somebody would change a method signature on the service base class, not realizing that somebody else had overridden the original method definition in a stub subclass. Tests would start failing and it would take a while to figure out what was going on. To be perfectly honest, in that instance we shifted back to using separate interfaces to avoid the confusion.
- It’s potentially open to abuse, with people just diving into implementation without thinking clearly about their interfaces. I haven’t seen this happen in practice though, particularly when a TDD approach is adopted, since writing the tests first forces you to think about the interface anyway.
- Invocation on mocks and proxies may be fractionally slower. However, before you get too excited about this I want to emphasise that we’re probably talking microseconds here, which would only be noticeable for your tests if they were calling DAOs and services thousands of times…which they’re probably not going to do.
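The first of those pitfalls is easiest to see in code. Here’s a hypothetical sketch (OrderService and StubOrderService are invented names): the base method originally took a String, a stub overrode it, and the base signature was then changed to take a long. The stub now compiles happily as a mere overload, and calls through the new signature hit the base class’s behaviour instead of the stub’s. Java 5’s @Override annotation catches exactly this at compile time, if you remember to put it on your stub methods:

```java
// Hypothetical base class: findOrder's signature was changed from
// findOrder(String id) to findOrder(long id).
class OrderService {
    String findOrder(long id) {
        throw new UnsupportedOperationException("real implementation pending");
    }
}

// A stub written against the OLD signature. Without an @Override annotation
// the compiler stays silent: this is now just an extra overload, not an
// override, so it never intercepts calls to findOrder(long).
class StubOrderService extends OrderService {
    String findOrder(String id) {
        return "stubbed order " + id;
    }
}
```

Calling new StubOrderService().findOrder(7L) bypasses the stub entirely and throws UnsupportedOperationException – the confusing test failure described above. With a separate interface (or a conscientiously applied @Override), the stale stub would have failed to compile instead.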
If you’re new to mocking frameworks, dependency injection or Spring, I probably wouldn’t adopt this approach in the first instance – you’ll already have enough to get your head around. However, if you’re more experienced, it may be worth a look. And even if you don’t start using this technique straight away, it’s worth knowing that you can do it, rather than just blindly creating interfaces and implementations for every service and DAO that you ever write. Think of it as another tool in your toolbox of techniques for creating more concise, easy-to-follow code.