One of the key tenets of agile is ensuring shippable quality of incremental deliverables while responding to changing requirements. But the challenge of balancing the testing of new functionality against validating the correct operation of existing functionality causes many agile development teams to get bogged down in the creation, management, and maintenance of an expanding suite of tests. In the end, it can become very difficult to accelerate agile and be confident of both product quality and security without the right information.
The amount of code exercised during testing is a useful metric for understanding the level of risk mitigation being performed, but it's often misused and, at the macro level, isn't a good indicator of quality. In this post, I'll show you how to use code coverage metrics intelligently to focus test efforts where they're needed most, by understanding where new tests are required. We'll also get into some of the best practices for creating maintainable test suites.
Code coverage is not “the” number for determining when you have enough tests, but it is “a” number that can be very useful to guide teams on where to focus.
Unfortunately, it is often used incorrectly to manage teams by focusing on the number itself and shooting for a specific percentage against the code base, for instance, using policies like “we must have 80% coverage before we can release” or “the coverage number should be the same or higher than the prior release.”
The problem with this approach is that reaching and maintaining a target coverage number is difficult and time consuming in itself, so teams blindly work toward the number and no one takes the time to ask the important questions, such as whether the right code is being tested and whether the tests themselves are meaningful.
As I discussed in a previous blog, each change in the code base represents an introduction of risk, and understanding the specific impact of each change on existing code is important along with understanding how to mitigate that risk.
By identifying changes in the code base, and using code coverage to correlate tests to those changes, an optimal test plan can be created based on where re-testing is needed (read more about Change-Based Testing here).
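The idea above can be sketched in a few lines. This is a simplified illustration, not how any particular tool implements it: the per-test coverage map and the set of changed lines are hypothetical inputs that, in practice, would come from a coverage collector and the SCM diff.

```python
# Sketch of change-based test selection: rerun only the tests whose
# recorded coverage touches a line that changed in this build.
# Both inputs are hypothetical; real tools gather them automatically.

def select_tests(per_test_coverage, changed_lines):
    """Return the tests whose coverage intersects the changed lines."""
    return sorted(
        test for test, covered in per_test_coverage.items()
        if covered & changed_lines  # non-empty set intersection
    )

# Example data: coverage recorded per test as (file, line) pairs.
per_test_coverage = {
    "test_login":    {("auth.py", 10), ("auth.py", 11)},
    "test_checkout": {("cart.py", 42), ("cart.py", 43)},
    "test_report":   {("report.py", 7)},
}
changed_lines = {("cart.py", 42), ("cart.py", 99)}

print(select_tests(per_test_coverage, changed_lines))  # ['test_checkout']
```

Note that `("cart.py", 99)` is changed but covered by no test, which is exactly the gap the next section is about.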
But this doesn’t cover all the risk. Obviously, we still need to create tests for the new functionality, but what about the existing/legacy code? Many organizations we talk to have a goal of 60-80% code coverage, but in reality struggle to get above 50%. So there is a good chance that a change to existing code will not be covered by an existing test case. Focusing purely on preserving, or incrementally growing, the macro coverage target doesn’t protect against introducing regressions into legacy functionality that “has been working for years.”
Instead, by looking closer at the coverage details, specific modified lines that have NOT been covered can be quickly identified, so you can focus the team on where new tests need to be created. Furthermore, using traceability between test cases and the specific code they are exercising, you can identify potential test cases that can be duplicated or extended to cover the changes.
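At its core, finding uncovered modified lines is a set difference between the lines the diff touched and the lines any test exercised. A minimal sketch, with hypothetical inputs:

```python
# Modified lines with no covering test are the candidates for new tests.
# Lines are modeled as (file, line) pairs; real inputs would come from
# the SCM diff and the merged coverage of the whole test suite.

def uncovered_modified_lines(modified, covered):
    """Return the modified lines that no test exercised."""
    return modified - covered

modified = {("cart.py", 42), ("cart.py", 99), ("auth.py", 10)}
covered  = {("cart.py", 42), ("auth.py", 10)}

print(uncovered_modified_lines(modified, covered))  # {('cart.py', 99)}
```

Each line in the result is a concrete, reviewable work item: either extend an existing test that already exercises the surrounding code, or write a new one.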
By focusing on achieving 100% coverage of the modified code, versus an arbitrary team goal of “80% total coverage,” the team can be much more efficient at mitigating the risk of new code while eliminating factors impacting overall project stability (e.g. modifications to legacy code).
Measuring this intelligent code coverage is possible using Parasoft DTP’s Modified Code Coverage widget (an analytic “slice” of Parasoft DTP’s Process Intelligence Engine (PIE)). Here, you can easily see the coverage of the code that has been added or changed between two builds (e.g. the current build and a target build of your choosing). For example, figure 1 shows such a widget. In this example, 177 lines of code were added or changed between the two builds and 109 of those lines have been tested, i.e. 61.6%. This means 68 lines of new or changed code are not covered by any test; they have not been validated and represent risk in the code base.
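The arithmetic behind the widget's numbers is straightforward and worth checking against the example above (177 modified lines, 109 tested):

```python
# Modified-code coverage: percentage of added/changed lines that were
# exercised by at least one test between two builds.

def modified_code_coverage(modified_total, modified_covered):
    """Return the modified-code coverage as a percentage."""
    return 100.0 * modified_covered / modified_total

pct = modified_code_coverage(177, 109)
print(f"{pct:.1f}% modified-code coverage")   # 61.6% modified-code coverage
print(f"{177 - 109} modified lines untested") # 68 modified lines untested
```

Note the denominator is only the modified lines, not the whole code base, which is what makes the metric actionable for a single iteration.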
Sitting behind this widget is a modified coverage report. The report provides exact details on what code is missing appropriate tests. This is key information that developers and testers need in order to focus their efforts. Figure 2 shows such a report, in which modified files can be sorted based on the number of changes, or code missing tests, with uncovered modified lines marked in red.
This report answers the question “what tests am I missing?” Based on the information here for each build, a test plan can be created.
Once you have identified the code where you are missing tests, you actually need to get to work and create the tests to fill the gap – but what type of tests do you create? The Testing Pyramid (as evangelized by Martin Fowler and Mike Cohn) outlines how to create an efficient, manageable, and maintainable portfolio of tests. By minimizing brittle UI-level tests and focusing on a solid foundation of unit tests (backed up with comprehensive service/API-level tests), you can build a testing strategy that is scalable, maintainable, and can be executed continuously.
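As a toy illustration of the pyramid's base, here is the shape of a fast, isolated unit test: pure logic, no UI, no network. The `apply_discount` function is hypothetical, standing in for whichever modified code your coverage report flagged as untested.

```python
# A unit test at the base of the Testing Pyramid: exercises one piece of
# business logic directly, so it runs in milliseconds and fails precisely.
# `apply_discount` is a hypothetical function under test.

def apply_discount(price, percent):
    """Apply a percentage discount, rejecting out-of-range inputs."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_typical_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_invalid_percent_rejected():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass  # expected
    else:
        raise AssertionError("expected ValueError for percent > 100")
```

Tests at this level cover both the happy path and the error path of the modified line, which is exactly the coverage the modified-coverage report is asking for; a test runner such as pytest would discover and execute the `test_*` functions automatically.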
We are not going to go into the details of creating unit or API tests in this blog, but you can check out my previous blog on unit testing and watch out for an upcoming blog about how Parasoft SOAtest can be used to simplify the creation of API/Service level tests.
Code coverage is an important metric, but it’s often overused and misused, especially when it is used to measure quality. To get more value from code coverage, leverage Parasoft DTP’s analytic intelligence to detect where new tests are needed, mitigate risks, focus your testing, and optimally achieve your quality goals.
VP of Products at Parasoft, Mark is responsible for ensuring that Parasoft solutions deliver real value to the organizations adopting them. Mark has been with Parasoft since 2004, working with a broad cross-section of Global 2000 customers, from specific technology implementations to broader SDLC process improvement initiatives.