Sunday, July 20, 2008

Unit testing - prefer messages

tl;dr version - if your unit test tool lets you associate (informational) messages with your test assertions, use the fuck out of them.  It's great that you're driving towards 100% code coverage.  How much greater will it be in two months when someone (probably you) breaks a test and has a useful indicator of exactly what was being exercised, rather than having to puzzle out a bare assertion?
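To make that concrete, here's a minimal, self-contained sketch assuming NUnit (the code under test is a toy invented for illustration):

    using System.Collections.Generic;
    using NUnit.Framework;

    [TestFixture]
    public class SplitterTests
    {
        // Toy code under test, just so the example runs.
        private static List<string> SplitWords(string input)
        {
            return new List<string>(
                input.Split(new[] { ' ' }, System.StringSplitOptions.RemoveEmptyEntries));
        }

        [Test]
        public void SplitWords_EmptyInput_YieldsNoWords()
        {
            // Without the message, a failure two months from now reads
            // "Expected: 0  But was: 1" and nothing else.  With it, the
            // failure report explains what was actually being exercised.
            Assert.AreEqual(0, SplitWords("").Count,
                "an empty input should yield an empty word list, not a single empty entry");
        }
    }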
I'm sadly new to the unit testing game, so I've been learning the wrong way to do things at an astonishing clip, while occasionally stumbling onto something that works by accident.
I never quite understood the hubbub over unit testing - why do I want to do extra work that doesn't go towards getting a working product out the door?  Now that I'm writing oodles of unit tests, I understand exactly why I want to write them - they save my ass early and often.
Case in point - the object-to-XML mapper I'm writing (this isn't not-invented-here syndrome; I genuinely can't use LINQ to XML because the hierarchy the external service produces is not only unpublished but subject to change).  It's been working, but I noticed that it was... how shall I say... less than performant?
So I set out to refactor the critical sections of the code.  I started by gut feel, taking FxCop up on its suggestion to use IXPathNavigable, and knocked a bunch of stuff out.  Minor improvements.
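For the curious, the FxCop suggestion amounts to accepting the IXPathNavigable interface instead of a concrete DOM type, so callers can hand over a lightweight, read-only XPathDocument instead of a full XmlDocument.  A rough sketch of the shape of that change - the mapper and element names here are invented:

    using System.Xml.XPath;

    public static class CustomerMapper
    {
        // Before: public static string ReadCountryCode(XmlDocument doc)
        // After: any XPath-queryable source works, including XPathDocument.
        public static string ReadCountryCode(IXPathNavigable source)
        {
            XPathNavigator navigator = source.CreateNavigator();
            XPathNavigator node = navigator.SelectSingleNode("/Customer/CountryCode");
            return node == null ? null : node.Value;
        }
    }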
Then I stopped programming by guesswork and profiled a typical run pattern.  Creating objects (with objects creating other objects), updating the persistent XML store, blah blah blah.  Found the astonishingly slow parts of my code and broke out the chainsaw to fix them up.
For a change, I had a really, really high level of confidence in all the changes I was making.  Before unit tests, it was just more guesswork as to what I might be breaking outside the code I was touching - look at it funny, change a postfix increment to a prefix increment, and who knows what falls over (OK, that's an exaggeration, but you know what I mean).  Now that I've got unit tests in place, it's a whole different story.  I can try things out and see if anything breaks in real time.  If the coverage is good enough, I've got silly confidence that everything's on the up-and-up.  If it isn't, whatever.  Adding a few more tests isn't moving mountains.
But a strange thing happened along the way - unit tests that I'd written a month or two earlier started breaking.  Even stranger, I had no idea what some of them were doing.  Not many of them, but for a handful I was completely clueless about why I'd written them in the first place.
That sends up red flags for me - there's still value in having those unit tests, but if I don't have a little more context associated with them, they're going to bit-rot really, really fast.  I started by putting comments above the tests explaining what they were doing, but that felt kind of unsatisfying.
I find that when I write unit tests, I slip into a lightweight QA state of mind - I think less about the cases that should work and more about the edge cases, the awkward states I can put my code into to get it to break.  It gives me a chance to stand back and re-examine the code from that stance, as well as to get a feel for how easy the class is to use, since I have to instantiate objects (and everything else in its dependency graph) before I can start to test it.
The time I spend thinking about what the class is doing for me and how to use it lends itself naturally to embedding that context in the tests.  Not shallow messages like "validating that CountryCode gets populated when the object's hydrated from XML," but ones that capture intent, like "validating that nullable enumerations are being populated properly."
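In test form, that intent-level message might look something like this (NUnit again; the Customer type, CountryCode enum, and toy mapper are all invented stand-ins for the real code):

    using System.Xml.XPath;
    using NUnit.Framework;

    public enum CountryCode { US, CA }

    public class Customer
    {
        public CountryCode? CountryCode;   // nullable: the XML may omit it
    }

    [TestFixture]
    public class CustomerMapperTests
    {
        // Toy mapper standing in for the real object-to-XML code.
        private static Customer FromXml(string xml)
        {
            XPathNavigator node = new XPathDocument(new System.IO.StringReader(xml))
                .CreateNavigator()
                .SelectSingleNode("/Customer/CountryCode");
            Customer customer = new Customer();
            if (node != null)
                customer.CountryCode =
                    (CountryCode)System.Enum.Parse(typeof(CountryCode), node.Value);
            return customer;
        }

        [Test]
        public void HydrateFromXml_PopulatesNullableEnumerations()
        {
            Customer customer = FromXml("<Customer><CountryCode>US</CountryCode></Customer>");

            // The message records the intent (nullable enums in general), not
            // just the mechanics (CountryCode happened to be the property used).
            Assert.AreEqual(CountryCode.US, customer.CountryCode,
                "validating that nullable enumerations are being populated properly");
        }
    }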
Prefer messages in the unit tests you write.  They'll help you get the most out of your tests as you write them, and they'll help you understand those tests when they break down the road.
