09 Aug 2007 Unit Testing and The Way of Testivus
Recently Ben posted an entry and presentation on test-driven development. An amusing take on this can be found in The Way of Testivus, which provides the wisdom of ancient masters on testing. My personal favourite is:
The pupil asked the master programmer:
“When can I stop writing tests?”
The master answered:
“When you stop writing code.”
The pupil asked:
“When do I stop writing code?”
The master answered:
“When you become a manager.”
The pupil trembled and asked:
“When do I become a manager?”
The master answered:
“When you stop writing tests.”
The pupil rushed to write some tests. He left skid marks.
Mark, posted at 09:34h, 10 August
I want to laugh but having interviewed for new staff recently it has been amazing to see how few people can answer questions about testing. Is it becoming a lost art?
Chivalry, posted at 13:55h, 24 September
To test or not to test? That is the question. A question that has been deliberated, argued, questioned, and, worst of all, has become a pseudo-religious topic for all those TDD evangelists on the “testing brings your code closer to perfection” crusade.
Well… to those who preach to me, I say one simple thing: “Whatever works for you, mate”. If you feel that you’ll deliver code quality by creating a truckload of unit/integration tests, well, that’s good for you. I’m sure your clients feel warm and cosy at night knowing they have a stack of expensive tests that cover them from a single rogue change. Oh… and a simple question before I forget: how long do your tests take to run before you can integrate your code? I wonder if your clients know the answer.
I myself don’t participate in, nor believe in, the TDD religious view. I’m more interested in delivering value for money.
matthewj, posted at 15:35h, 24 September
I agree that TDD has become religious and I also agree with “whatever works” and being pragmatic about TDD (as with every other technique we use). I know of many developers who find that TDD does work for them.
But – to test or not to test … ?
I hope that you’re not implying that ‘not testing’ is an option – perhaps you meant “to write test code or not to write test code”, with the alternative being manual testing? And the answer to that should definitely be: only where it is adding value (but you might be surprised how often it does …)
Personal experience on projects is that adding more test code that can be automated in a continuous integration environment saves far more time than it costs to write. The time is saved in test execution – tests are run automatically and regularly – as well as in defect resolution – defects are found earlier and fixed more easily.
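To make the above concrete, here is a minimal sketch of the kind of automated check a CI server can run on every commit. The `PriceCalculator` class and its discount rule are hypothetical, and plain assertions are used instead of JUnit so the example runs standalone:

```java
// A hypothetical unit test of the style a CI build would execute automatically.
public class PriceCalculatorTest {

    // Production code under test (inlined so the example is self-contained).
    // It works in integer cents to avoid floating-point rounding surprises.
    static class PriceCalculator {
        long totalWithDiscount(long subtotalCents, int itemCount) {
            // Hypothetical rule: orders of 10 or more items get a 5% discount.
            long discountCents = itemCount >= 10 ? subtotalCents * 5 / 100 : 0;
            return subtotalCents - discountCents;
        }
    }

    public static void main(String[] args) {
        PriceCalculator calc = new PriceCalculator();

        // Each check documents expected behaviour; a failure stops the build
        // long before a tester or a client ever sees the defect.
        check(calc.totalWithDiscount(10000, 1) == 10000, "no discount under 10 items");
        check(calc.totalWithDiscount(10000, 10) == 9500, "5% discount at 10 items");

        System.out.println("All checks passed");
    }

    static void check(boolean condition, String description) {
        if (!condition) {
            throw new AssertionError("FAILED: " + description);
        }
    }
}
```

Because the check is code, it reruns for free on every build; the defect-resolution saving described above comes from a failing check pointing at the exact behaviour that regressed.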
To be fair, with a greenfield Java (Ruby, .NET etc.) project it is easy to create the tests as you go and get the maximum benefit. However, with older technology (COBOL, PHP or even legacy Java) the architecture itself may make it difficult or just not cost-effective to add test code. These environments can also be complex to run tests in, especially if large third-party packages are involved – again, pragmatism should apply.
How long do our tests take to run before you can integrate code? Anything from 2–5 minutes is typical – and yes, with big, complex projects it can get longer. But tests can be improved and managed in suites to maximise the benefit without taking too long.
One thing our clients do know – many more defects are being caught *before* the systems hit production, and they like that. A lot.
Chivalry, posted at 11:04h, 02 October
Prompt reply. Thanks for that. To answer your question: no, I’m not suggesting that you ignore unit testing; I’m no amateur, let me assure you of that.
My post was directed at your blog entry, period. Not the derivation of it that you so eagerly provided. I would, to some degree, agree with your assertion that “adding more test code that can be automated in a CI environment saves more time” – that’s provided you’ve measured it, right? The lack of measurement would only suggest a theory, not the conclusive evidence your assertion leads your readers to believe (perhaps foolishly?).
Some things are a fact. More tests in a CI environment take more time to execute to validate build “health”. More tests take more time to refactor and change. More tests significantly increase your core library upgrade times. More tests mean more money the client needs to pay. Where do you stop? Unit, integration, Selenium, stubs, mocks – whatever it takes to remove our dependency on the container. Sorry, but I’m not sold.
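For readers unfamiliar with the jargon in this exchange, the “stubs and mocks to remove the dependency on the container” can be sketched in a few lines. All names here (`RateService`, `Invoice`, the 10% rate) are hypothetical, and the stub is hand-rolled rather than using a mocking library:

```java
// A dependency the real system would look up from the container (app server,
// database, remote service...). In a test we substitute it with a stub.
interface RateService {
    double taxRate(String region);
}

// Production code written against the interface, not the container.
class Invoice {
    private final RateService rates;

    Invoice(RateService rates) {
        this.rates = rates;
    }

    long totalCents(long netCents, String region) {
        return Math.round(netCents * (1.0 + rates.taxRate(region)));
    }
}

public class InvoiceStubTest {
    public static void main(String[] args) {
        // Hand-rolled stub: a fixed 10% rate, no app server, no network.
        RateService fixedRates = region -> 0.10;

        Invoice invoice = new Invoice(fixedRates);
        long total = invoice.totalCents(1000, "AU");

        if (total != 1100) {
            throw new AssertionError("expected 1100 cents, got " + total);
        }
        System.out.println("Stubbed test passed: " + total);
    }
}
```

Whether this indirection is worth the extra code is exactly the cost/benefit question being argued in this thread; the stub only pays off when running against the real container is slow or expensive.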
One thing our clients do know – we delivered business functions that are clear of defects which were caught during Development, System and UAT testing. There is still a need for Testers, right?
matthewj, posted at 14:28h, 02 October
“… we delivered business functions that are clear of defects …”
Great! If you’re happy with your process and you’re getting good results then don’t change anything. Quality code was being written long before anyone thought of JUnit, mocks and TDD.
“There is still a need for Testers, right?”
Absolutely. TDD fits primarily into the unit-testing domain; there is still a need for system/functional testing, UAT, performance and volume testing etc. (although some of that can also be automated if it makes sense to do so).
“… provided that you’ve measured it …”
Have I done detailed measurements? No.
Have others? Yes.
Do they support the use of TDD? Some do and some don’t.
However, I don’t believe I’ve ever asserted that TDD (or anything else) is the “one true way”. It’s just one of many techniques available for us to use.
tegi, posted at 10:55h, 16 October
Your approach may work for a while on small greenfield projects.
I worked on one pretty big project that did not have unit tests. The reasons for the absence of those tests were exactly as you described: more time needed to develop, support and run the tests, and therefore additional cost for the client, plus “we write code clear of defects”.
The project was a failure because System and UAT phases could not be finished. Testers kept finding defects, developers kept fixing them.
There was no way to tell whether a raised defect was the result of the newly introduced functionality itself, a side effect of another defect fix, or a previously existing defect that the new functionality had highlighted.
I’m completely convinced that automated unit and integration tests pay back very well, as they are a real safety net against regression defects.
Which framework or frameworks to use to create those tests depends on the complexity and structure of the particular project.