Notes on Unit Testing

While walking a coworker through the concepts of test-driven development and unit testing, I put together some notes. I thought I'd copy them online for future reference.

TDD steps:
A. Choose a test or suite of tests. Design it/them.
Design mode involves asking the following questions:

1. What do I assume exists? (the Setup phase)
2. What do I test for? (the Assertion(s) phase)
3. What do I reset after? (the Teardown phase)
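These three phases map directly onto a test fixture. Here is a minimal sketch in Python's unittest rather than NUnit (the structure is the same), using a hypothetical shopping-cart example:

```python
import unittest

class CartTests(unittest.TestCase):
    def setUp(self):
        # Setup phase: what do I assume exists?
        # A hypothetical cart, represented here as a plain list.
        self.cart = []

    def test_added_item_is_in_cart(self):
        # Assertion phase: what do I test for?
        self.cart.append("book")
        self.assertIn("book", self.cart)
        self.assertEqual(len(self.cart), 1)

    def tearDown(self):
        # Teardown phase: what do I reset after?
        self.cart.clear()
```

In NUnit the same shape appears as methods marked with the [SetUp], [Test] and [TearDown] attributes.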

Designing tests well will save you many headaches down the road. Take my word for it. If not, go through the rest of the steps, hit the headaches and pitfalls, then come back and read the "I told you so" statement I have waiting here for you :)

B. Write the code for the unit test(s) according to the above plan.

The Setup and Teardown phases are often best written first, and together, because they change and then restore the state of the software system. This matters whenever you need test data present that should not persist beyond the scope of the test. You always want to leave the system clean, i.e. in the exact state it was in before you ran the test.
Ensure the Teardown phase removes any system changes made in the Setup phase; writing the two together, before the core unit tests in the suite, makes that easy to verify.
Oh, and make sure that once you write your Setup, Teardown and test methods, the project actually compiles.
This will almost certainly require you to create "stubs" (i.e. empty or unimplemented classes) for the new objects you intend to test, in addition to referencing already-tested classes and their methods.
Don't worry that you're referencing unimplemented classes in your tests just yet; that's why you're writing the tests, so that when you get to implementing those classes you know exactly how far along you are.
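A stub of this kind can be as small as the following sketch: a hypothetical Inventory class with just enough shape for the test code to compile and run, but no logic, so every test that exercises it fails:

```python
class Inventory:
    """Stub for a class that does not exist yet. Tests can reference
    it, so the project compiles, but nothing is implemented."""

    def add(self, sku, quantity):
        raise NotImplementedError

    def count(self, sku):
        raise NotImplementedError
```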

C. Run NUnit (or your test runner of choice) and ensure all tests fail.

At this point you may choose to hold a coding checkpoint with your team lead or peers to confirm that you have truly written a useful and complete suite of tests, and to catch any critical tests you may have overlooked that should have been designed and coded in this cycle.

D. Write the code that enables the tests to pass.
This is the meat of things. You know what you’re testing for, so you now implement the logic to make this happen. All those nicely stubbed objects now get substance in your quest to empower them with the logic needed to pass the tests.
Once all your red lights (fails) turn green (pass), you’ve just made a significant step towards completing this development cycle.
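As a sketch of what "giving a stub substance" looks like: suppose one of the stubbed objects was a hypothetical Inventory class; making its tests pass means filling in the real logic behind each unimplemented method:

```python
class Inventory:
    """A hypothetical class that started life as an empty stub,
    now given enough logic to turn its failing tests green."""

    def __init__(self):
        # Internal stock table: SKU -> quantity on hand.
        self._stock = {}

    def add(self, sku, quantity):
        # Accumulate quantity for the given SKU.
        self._stock[sku] = self._stock.get(sku, 0) + quantity

    def count(self, sku):
        # Unknown SKUs simply have zero stock.
        return self._stock.get(sku, 0)
```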

E. Refactor your code to make it more efficient.
This is important. Getting all tests to pass is good, but we don't always do things the right way the first time. Look back at the code that passes each of the tests.
Could it be optimised in some way?
Did you take any shortcuts in a moment of weakness/time crunch to make the test pass that you need to correct now to make the code more robust?
Will this code be chewed up by your peers at a code review for its lack of pattern usage, adherence to coding standards, complete documentation, etc., or revered as a masterpiece in coding and a job well done?
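To illustrate that "shortcut taken in a moment of weakness" (a hypothetical example, not from the original post): code that passes a test only for the exact inputs the test happened to use, refactored into the general rule it was meant to express. Both versions keep the tests green:

```python
# Before: a shortcut that passes the test only because the test
# happened to order exactly three items.
def discounted_total_shortcut(price, quantity):
    if quantity == 3:
        return price * 3 * 0.9  # 10% bulk discount, hard-coded
    return price * quantity

# After: the same behaviour expressed as a general rule, so new
# quantities are handled correctly without new special cases.
BULK_THRESHOLD = 3
BULK_DISCOUNT = 0.9  # 10% off bulk orders

def discounted_total(price, quantity):
    total = price * quantity
    if quantity >= BULK_THRESHOLD:
        total *= BULK_DISCOUNT
    return total
```

The refactored version behaves identically on the original test's inputs, which is exactly what lets you refactor with confidence: the suite you wrote in step B guards the change.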

F. Code Review.
This is the often missed or sacrificed step, but it is as important as all the others. You've gone through the previous steps and done the best job you (hopefully) could to optimise your code.
You are now ready to have it validated by your peers. Don't avoid it because you fear criticism; criticism is good, since it shows you where you have to grow in building better code for other tests down the line.
And it will obviously lead to better code delivered to the client.
Of course if you’re the one asked to participate in the review, remember the five C’s when giving feedback in these reviews that will make it a good one, i.e. be Caring, Consistent, Clear, Current and Concise with your feedback.

G. Integrate review feedback and be done.
Once you have aggregated your peer feedback, ensure that you have integrated it into your current code set.
If you did a proper job checkpointing with your lead during the development process, the peer feedback should amount to minor tweaks in code. It should not lead to a major rewrite; any time it does, you either are not comprehending the feedback given or really did do a half-assed job moving through the previous steps.
Checkpoint with your lead if you are unsure how to integrate the feedback, or if it is taking more time than a minor rewrite should, to make sure you are understanding the feedback properly.

H. Call it done. Move on.
There you go. Once these steps are complete, you’re done, you can move on to developing a new set of tests for another feature set and repeat the cycle.
