When it comes to assessing the risk of a code base, there is no silver bullet, no single magic number, and no simple “go/no-go” traffic light. Risk is multi-dimensional and multivariate, and it’s measured differently by different organizations.
You probably already know where the high-risk or “bad” parts of the code are – they are the parts you are always changing: little tweaks here and there to fix small issues that seem innocuous in themselves but typically represent features layered on top of poor design. This is why changing existing code is the leading cause of introducing defects into an application.
But we also know that change is constant. You never implement everything completely or correctly the first time, and as you layer on top of the existing code, knowledge of each use case and scenario is lost, complexity increases, and the code becomes riskier and riskier. It’s these changes that provide the key to applying context to risk.
Just as important as the visibility into risk itself is understanding how to deal with it – how to prioritize remediation actions to achieve an ‘acceptable level of risk’ while minimizing the impact on team velocity. This post looks at just that: how to assess the risk of code changes and how to efficiently prioritize and mitigate the risk.
Risk isn’t a single number or a project-level “traffic light” (although we do use the easily-recognized traffic light colors in our UI); it’s a categorization of the codebase and guidance on where real and potential problems exist. See below:
An example of a pie chart from Parasoft’s Process Intelligence Engine showing the proportion of high-, medium-, and low-risk code.
The categorization of risk is both multi-dimensional and multivariate – you have to bring together quality data from techniques such as static analysis, code metrics, code coverage, and testing to truly understand it. No single technique supplies a risk dimension on its own; each one contributes an input to a formula. For example, code coverage isn’t a good number to use in isolation because you could have 100% coverage with only a small number of tests doing anything meaningful. You need to think about what you are using code coverage to tell you (i.e., “how well is my code tested?”) and augment it with more data to get a meaningful analysis.
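To make that coverage point concrete, here is a minimal Python sketch (not Parasoft’s formula) of augmenting raw coverage with a second signal. The testing_score function and its assertion-density weighting are illustrative assumptions:

```python
# A minimal sketch (not Parasoft's formula) of augmenting raw coverage
# with a second signal. The assertion-density weighting is a
# hypothetical illustration of "coverage plus more data."

def testing_score(coverage_pct: float, assertions: int, covered_lines: int) -> float:
    """Estimate how well code is tested, not just how much of it runs.

    coverage_pct:  statement coverage, 0-100
    assertions:    assertions executed against this code
    covered_lines: lines those tests actually exercise
    """
    if covered_lines == 0:
        return 0.0
    assertion_density = assertions / covered_lines
    return (coverage_pct / 100.0) * min(assertion_density, 1.0)

# 100% coverage but almost no assertions: the tests run the code
# without checking much, so the score stays low.
print(testing_score(100.0, 2, 500))   # 0.004
# 80% coverage with dense, meaningful assertions scores far better.
print(testing_score(80.0, 400, 400))  # 0.8
```

Two suites with identical coverage now score very differently depending on how much their tests actually assert.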
An example of a risky code change bubble chart illustrating where the highest risks lie. (Bubbles can be expanded to see the metrics driving the categorizations.)
The bubble chart above illustrates the categorization of risk based on two dimensions (also shown in the chart below):
Code that is poorly tested (i.e., has a higher Test Deficit) is categorized as high risk (red), while code that is both well tested and well constructed (i.e., has a lower Maintenance Burden) is categorized as low risk (green).
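As a rough illustration of this two-dimensional categorization, here is a hedged Python sketch. The dimension names come from the chart, but the 0-to-1 scales, threshold values, and max-based bucketing are assumptions, not the actual categorization logic:

```python
# A hedged sketch of the two-dimensional categorization described above.
# The thresholds and scoring are illustrative assumptions.

def categorize(test_deficit: float, maintenance_burden: float,
               high: float = 0.66, low: float = 0.33) -> str:
    """Map a code unit onto the red/yellow/green risk buckets.

    test_deficit:       0.0 (well tested) .. 1.0 (untested)
    maintenance_burden: 0.0 (well constructed) .. 1.0 (hard to maintain)
    """
    worst = max(test_deficit, maintenance_burden)
    if worst >= high:
        return "red"     # high risk: poorly tested and/or hard to maintain
    if worst <= low:
        return "green"   # low risk: well tested and well constructed
    return "yellow"      # medium risk: one dimension needs attention

print(categorize(test_deficit=0.8, maintenance_burden=0.2))  # red
print(categorize(test_deficit=0.2, maintenance_burden=0.1))  # green
```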
During the heat of development, your code-base is in a constant state of flux and every single line of code changed presents an unknown risk. Will it break a fundamental feature? Does it introduce a security flaw? The less information, the greater the risk. In previous posts, we discuss the impact of change on testing and how code coverage needs to be used intelligently to predict where testing resources need to focus. However, even with increased coverage and testing, there’s still additional risk that accumulates over time.
Change in the codebase gives us the third, and most important, risk dimension: time. Not time in the traditional sense, but time as it relates to builds and the changes between them. Focusing on the parts of the codebase that have changed between builds gives us the ability to concentrate on the code that is both highest risk and most relevant, since the team is currently working in that part of the codebase.
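For example, if the code lives in Git and each build is tagged, a short sketch like the following could list the files changed between two builds so the risk analysis can focus on them (the tag names are hypothetical):

```python
# A minimal sketch, assuming the code lives in Git and each build is
# tagged. It lists the files that changed between two builds; the tag
# names below are hypothetical.

import subprocess

def changed_files(prev_build: str, curr_build: str) -> list[str]:
    """Return paths modified between two build tags."""
    out = subprocess.run(
        ["git", "diff", "--name-only", prev_build, curr_build],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

# Restrict the risk report to code the team is actively touching.
for path in changed_files("build-142", "build-143"):
    print(path)
```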
Reused and legacy code already carries its own burden, particularly for security. Each submitted or modified line of code adds to this debt if there aren’t adequate checks in place to maintain or improve the quality baseline. Getting out of this debt, like any debt, requires focus and a commitment to paying it down. And, as with any debt, how can you know where to save unless you know where the money is being spent?
Once you have identified the code with the highest risk and highest priority, you also need to consider the amount of work required to mitigate the risk. This is the fourth and final dimension: Quality Debt. In the bubble chart above, Quality Debt is represented by the size of the bubble – the bigger the bubble, the more known issues need to be addressed. In our example, Quality Debt is a combination of high-severity static analysis violations (including violations of set thresholds for code metrics) and test failures, normalized by the number of Logical Lines of Code (see Figure 3).
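As a back-of-the-envelope illustration of that aggregation (the equal weighting of violations and test failures is an assumption; the exact formula isn’t spelled out here):

```python
# An illustrative sketch of the Quality Debt aggregation described above:
# high-severity static analysis violations plus metric-threshold
# violations plus test failures, normalized by logical lines of code.
# The equal weighting is an assumption.

def quality_debt(high_sev_violations: int, metric_threshold_violations: int,
                 test_failures: int, logical_loc: int) -> float:
    """Outstanding quality tasks per logical line of code."""
    if logical_loc == 0:
        return 0.0
    issues = high_sev_violations + metric_threshold_violations + test_failures
    return issues / logical_loc

# A 2,000-LLOC module with 12 violations, 3 metric breaches, and
# 5 failing tests:
print(quality_debt(12, 3, 5, 2000))  # 0.01 -> drives the bubble size
```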
This aggregation of outstanding quality tasks gives guidance on the relative amount of work required to reduce the risk of the code.
Not every organization is going to follow the same quality practices or agree on what factors to take into consideration when calculating the dimensions. You need to be able to configure and create your own definition of risk.
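As a sketch of what such a configurable risk definition might look like (the keys and threshold values below are illustrative, not an actual schema):

```python
# A hedged sketch of a configurable risk definition. The keys and
# threshold values are illustrative assumptions, not the schema of the
# Marketplace artifact mentioned below.

RISK_CONFIG = {
    "test_deficit":       {"red_above": 0.66, "green_below": 0.33},
    "maintenance_burden": {"red_above": 0.66, "green_below": 0.33},
}

def categorize_with(config: dict, scores: dict) -> str:
    """Categorize a code unit using team-specific thresholds."""
    if any(scores[d] >= c["red_above"] for d, c in config.items()):
        return "red"
    if all(scores[d] <= c["green_below"] for d, c in config.items()):
        return "green"
    return "yellow"

# A safety-critical team might tighten its red threshold:
RISK_CONFIG["test_deficit"]["red_above"] = 0.5
print(categorize_with(RISK_CONFIG,
                      {"test_deficit": 0.55, "maintenance_burden": 0.2}))  # red
```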
The example in this blog is available to users on the Parasoft Marketplace, enabling you to use it out of the box and extend or modify it to meet your specific needs. Starting from the example, you can customize the static analysis rules, metrics thresholds, and risk categorizations to suit your organization.
Balancing budgets, schedules, and quality goals with adequate security measures while satisfying customers is a tall order, with risks at every turn. However, automating quality practices and applying process intelligence helps guide you to where resources are best spent. Understanding where the risk lies and how each code change impacts your baseline quality and security removes many unknowns from the development equation. Quality and security debt can be beaten with the right focus.
VP of Products at Parasoft, Mark is responsible for ensuring that Parasoft solutions deliver real value to the organizations adopting them. Mark has been with Parasoft since 2004, working with a broad cross-section of Global 2000 customers, from specific technology implementations to broader SDLC process improvement initiatives.