To clarify, by “noise” I mean tests that constantly fail, but you know (or at least think) it’s OK anyway, so you just let them be. Or tests that sometimes fail and sometimes pass, but no one has ever bothered to figure out why, let alone fix them. And then there are tests that are legitimately failing because the code has changed and the test needs to be updated. All of this noise is screaming out for our attention, but the catch-22 is that the more noise there is, the less likely we are to do anything meaningful about it.
But guess what? Somewhere in that noise of “failed but OK” tests are some real problems that you wish you knew about. Think of it like using a spell checker. If you don’t keep up with it, it flags all kinds of things you don’t care about, like industry jargon and proper names, that aren’t real spelling problems. But hiding somewhere in that mess are the embarrassing mistakes you actually made, silly misspelled words that you want out of there. Of course, a typo doesn’t carry the inherent risk that a software defect does; it just costs you a little embarrassment.
And yet, unit test suites are generally in that same state. Lots of noisy results that we get used to seeing and ignoring unfortunately hide real results that we need to know about and understand. In many organizations, the solution is to schedule a cleanup sprint every so often, anywhere from every couple of months to every couple of years. A large amount of time is spent getting the suite as clean as humanly possible, but inevitably the problem comes right back, and more quickly than you’d expect. This creates a negative feedback loop: no one wants to clean up the tests because they think the tests will just be noisy again the next time.
The answer is to take a more functional approach, one that eliminates tedious, low-value cleanup sprints and keeps the test suite from getting noisy in the first place.
To do so, it’s important to understand what it means when a unit test fails. It boils down to three reasons, each with a simple fix:

1. The code is broken. The test caught a real regression, so fix the code.
2. The test is out of date. The code changed legitimately, so update the test.
3. The test is unreliable. It fails (or passes) for reasons that have nothing to do with the code’s correctness. This is the noise.
Now, you might be thinking: what if a ton of my test cases fall into that third category? How is this any help? Let’s break that down.
The reasons for the noise usually come down to a few basic problems: bad tests, fragile tests, or poor assertions. Bad tests are tests that don’t do their job properly. Either they’re testing more than they should, or they’re depending on data that is inconsistent or subject to change based on external conditions.
To minimize the noise, make sure that for each test that’s giving you problems (or better yet, all of your tests), you have a good answer to these two simple questions:

1. What behavior or requirement is this test verifying?
2. What does it mean, and what should I do, when this test fails?
If, for any test, you don’t have a reasonable answer to both of these questions, it needs improvement.
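To make that concrete, here’s a minimal sketch (JUnit 5, with a hypothetical Cart class standing in for real code) of tests that answer both questions: each one verifies a single behavior, so a failure points at exactly one thing.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

class CartTest {
    // Hypothetical class under test, included so the sketch is self-contained.
    static class Cart {
        private final List<Long> pricesCents = new ArrayList<>();
        void add(long priceCents) { pricesCents.add(priceCents); }
        int itemCount() { return pricesCents.size(); }
        long totalCents() { return pricesCents.stream().mapToLong(Long::longValue).sum(); }
    }

    // Verifies one requirement: adding an item increments the count.
    // A failure here can only mean one thing is broken.
    @Test
    void addingAnItemIncrementsTheCount() {
        Cart cart = new Cart();
        cart.add(250L);
        assertEquals(1, cart.itemCount());
    }

    // Verifies one requirement: the total is the sum of item prices.
    @Test
    void totalIsTheSumOfItemPrices() {
        Cart cart = new Cart();
        cart.add(250L);
        cart.add(100L);
        assertEquals(350L, cart.totalCents());
    }
}
```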
Fragile tests are those that are easy to break. Again, this is often a symptom of lazy assertions: just because something can be checked doesn’t mean it should be checked. Each assertion should have real meaning pertaining to the requirement that the code under test fulfills. Common culprits include date/time-sensitive assertions, OS dependencies, filename/path dependencies, third-party software installations, partner APIs, and so on. Make sure you’re asserting only the minimum you need for a good test, and make sure that everything the test requires is part of your source control and build system.
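For example, here’s a minimal sketch (JUnit 5, with a hypothetical invoiceDate method standing in for the code under test) of taming a date/time-sensitive assertion by injecting a fixed java.time.Clock instead of asserting against the wall clock:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.time.Clock;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;
import org.junit.jupiter.api.Test;

class InvoiceDateTest {
    // Hypothetical code under test: stamps an invoice with "today".
    static LocalDate invoiceDate(Clock clock) {
        return LocalDate.now(clock);
    }

    @Test
    void invoiceIsStampedWithTodaysDate() {
        // Fragile version: assertEquals(LocalDate.now(), invoiceDate(Clock.systemDefaultZone()));
        // That can fail around midnight or on a machine in a different time zone.
        // Robust version: pin the clock so the test controls its own environment.
        Clock fixed = Clock.fixed(Instant.parse("2024-03-15T12:00:00Z"), ZoneOffset.UTC);
        assertEquals(LocalDate.parse("2024-03-15"), invoiceDate(fixed));
    }
}
```

The same idea, passing in a controlled dependency rather than reading the real environment, applies equally to file paths, OS details, and external services.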
Other bad assertions are those that are either constantly in a failed state that you don’t mind releasing with anyway (“Oh, thooose are OK, don’t worry about it”), or in a constantly changing state (“It was fine before, then yesterday it was failing, but today it’s fine!”). If the code is in flux, it might be OK to have constantly changing results for a short time, but in the long term it should be unacceptable. You need to understand why the test outcome keeps changing, or why you think it’s OK for a test to fail and still release. Doing peer review on your unit tests, including the assertions, will go a long way toward fixing this problem permanently. (An extra benefit of peer review? It makes life much easier if you’re in a compliance environment where tests are part of mandated oversight.)
Assessing broken tests is truly a great place to do most of your cleanup. I’d challenge you to look hard at tests that have been failing for months or even years and ask yourself if they’re really adding value. Remember, you’re ignoring the results anyway, so honestly, what good are they? Removing tests you ignore frees you to focus on the tests that matter, and it actually improves your overall quality.
And so it becomes fairly simple (although it might take an initial investment of time). To clean up, simply observe the following best practices:

- Make sure every test has a clear purpose and that a failure tells you something actionable.
- Keep assertions minimal and meaningful, tied to the requirement the code fulfills.
- Remove environmental dependencies (dates, OS details, paths, external services), or control them with stubs and mocks.
- Fix or delete tests that have been failing for months; don’t let them keep making noise.
- Peer-review your tests and assertions just like production code.
And of course, don’t forget to use automation to do the tedious work, so that the time you do spend writing tests is more productive and the tests you create are less noisy.
Taking advantage of automated software testing helps make unit testing tasks less tedious. If you can let automation do the simple, tedious parts (which computers are good at), it frees you up to do the things that require actual human intelligence (which you are good at). For example, let automation create the first working pass of your xUnit test cases, simple code that is very tedious to write by hand. If you let a tool generate your getter/setter test methods automatically, as in the sketch below, you can save tons of time to use for other, more interesting things.
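For a sense of scale, here’s the kind of round-trip check a generator might produce for every property; the Customer bean here is a hypothetical stand-in, but writing this by hand for dozens of fields is exactly the tedium worth automating.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class CustomerTest {
    // Hypothetical bean under test, included so the sketch is self-contained.
    static class Customer {
        private String name;
        String getName() { return name; }
        void setName(String name) { this.name = name; }
    }

    // A generated-style round-trip test: set a value, read it back.
    @Test
    void testSetGetName() {
        Customer customer = new Customer();
        customer.setName("Ada");
        assertEquals("Ada", customer.getName());
    }
}
```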
When we get more sophisticated with test automation, tools can help even further with some of the trickier parts of unit testing, such as creating and configuring stubs and mocks. The more you take advantage of automation, the less time unit testing will take, and it will be a lot less boring as well. If you’re using Java, take a look at the Parasoft unit testing solution. It does all of these things, and a lot more, making unit testing not just easier but far more enjoyable.
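To give a feel for what “creating and configuring stubs and mocks” involves, here’s a minimal hand-written sketch using Mockito with JUnit 5 (the PaymentGateway and Checkout types are hypothetical stand-ins); this wiring is the kind of scaffolding a tool can set up for you.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class CheckoutTest {
    // Hypothetical collaborator we don't want to call for real in a unit test.
    interface PaymentGateway {
        boolean charge(String accountId, long amountCents);
    }

    // Hypothetical code under test.
    static class Checkout {
        private final PaymentGateway gateway;
        Checkout(PaymentGateway gateway) { this.gateway = gateway; }
        boolean placeOrder(String accountId, long amountCents) {
            return gateway.charge(accountId, amountCents);
        }
    }

    @Test
    void placeOrderChargesTheGateway() {
        // Stub the collaborator so the test stays isolated and deterministic.
        PaymentGateway gateway = mock(PaymentGateway.class);
        when(gateway.charge("acct-1", 499L)).thenReturn(true);

        assertTrue(new Checkout(gateway).placeOrder("acct-1", 499L));

        // Assert only the interaction that matters for this requirement.
        verify(gateway).charge("acct-1", 499L);
    }
}
```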
Arthur has been involved in software security and test automation at Parasoft for over 25 years, researching new methods and techniques (including 5 patents) while helping clients improve their software practices.