This post is the first of a few I’ll write to help new users adopt static analysis tools into their development process. Getting started can be tricky if you haven’t first taken the time to identify the right strategies for your project.
As a Solution Architect here at Parasoft, I talk to a lot of people looking for help in this area, so know that you’re not alone! And if you want more information, you can download and read this full guide that I helped put together.
I’m assuming that your static analysis tools are installed and any initial configuration has been set up. From there, what I mean by “getting started” is gaining a better understanding of the general approach to integrating static analysis into an existing project, and how to increase the return on investment of static analysis over time.
In simple terms, static analysis is the process of examining source and binary code without execution, usually for the purposes of finding bugs or evaluating quality. Unlike dynamic analysis (e.g. Parasoft Insure++), which requires a running program to work, static analysis can be run on source code without the need for an executable.
This means static analysis can be used on partially-complete code, libraries, and third-party source code. Static analysis is accessible to the developer, to be used as code is being written or modified, or to be applied on any arbitrary code base.
In the application security domain, static code analysis goes by the term Static Application Security Testing (SAST). Static analysis can support security vulnerability detection, alongside bug detection, quality metrics, and coding standard conformance.
Static analysis tools are also mandated (or in some cases, “highly recommended”) by functional safety standards like ISO 26262 or EN 50128, for their ability to detect hard-to-find defects and improve software security. This ties back to security as well: static analysis tools help software teams conform to coding standards such as CERT, which targets secure coding directly, or MISRA, whose safety-oriented rules also improve security.
A great thing about static analysis tools is that they can be introduced and used at any stage of a project, effective even if a project is incomplete and partially coded. The biggest challenge with introducing static analysis is that a large amount of code can produce a large number of warnings. Therefore, the focus when integrating static analysis into a project should be on getting the team productive as soon as possible, and minimizing the opportunity for the team to get overwhelmed by all the static analysis warnings. This is not to diminish the importance of these warnings, but most developers don’t have the luxury of fixing existing or legacy code, at least not immediately.
The focus should be on integrating the tools into everyday processes so that access and usability are maximized, and then dealing with the most critical bugs and security vulnerabilities. Once the team becomes more proficient, you can then focus on optimizing the tools and processes to increase the return on investment.
To get the most out of static analysis, it’s important to understand the end goal. If the goal is better security, for example, that will shape the focus of analysis and remediation. If the goal is complying with a coding standard such as MISRA C, the focus will be on satisfying the coding standard and proving conformance to certification entities as required.
When first adopting static analysis, it’s easy to fall into the trap of assuming that more is better (i.e. that more analysis and more warnings mean you’re getting the most value out of the tool). Instead, stay focused on the end goal.
If security is the goal, keep the focus on improving security and reduce the distraction of other types of warnings. Critical bugs are of course always important to track down, but they shouldn’t distract from the main goal. Over time, as the team becomes more proficient, you will be able to incorporate secondary goals, such as improving overall quality or enforcing coding standards. As static analysis becomes part of each developer’s daily routine, they will be able to analyze results more quickly and fix bugs more efficiently. At that point, the secondary goals can be achieved effectively, instead of simply being overwhelming.
Once you understand your main goal to focus on, you need to identify the maturity of the product under development, as it impacts the way static analysis can be adopted. Consider the major stages of development below and identify where your project fits in, so you can understand which adoption approach is best for you.
The most common scenario is a software organization that decides to use static analysis, and rolls it out to their current projects.
Each project may choose to adopt the tools at the start of a sprint or at the beginning of a major new feature update. Realistically, software teams are always working – even as one product is “finished,” another version or variant is underway. The key aspect of this adoption scenario is that there is a significant body of existing code, with new code being developed daily. The recommended approach to integration is the “line in the sand” approach: improving new code as it’s developed, while deferring less critical warnings as technical debt. We’ll talk about this more in a moment.
Adopting static analysis for a mature product may have different goals than for a project still under development. This is a product in the later years of the software development lifecycle, in which little new code is being written – only enough to fix lingering bugs and security vulnerabilities. The primary approach to adopting static analysis for these projects is called “acknowledge and defer.” In this approach, since little new code is being developed, all of the discovered bugs and security vulnerabilities are added to the existing technical debt.
Although it’s not often that software teams get to have a fresh start, a new product and project is the ideal point to integrate new tools and techniques into the development process.
In these projects, little project-specific code exists yet, but the project may still rely on third-party and open-source software. Developers can integrate static analysis in their development environments from the start, ensuring a high standard of quality as code is being written. This allows the team to adopt coding standards and deal with critical static analysis warnings as they arise, adding fewer bugs and vulnerabilities to the technical debt pile. The approach to adoption in this case is aptly named “greenfield.”
Once a static analysis tool has been introduced into a project, it usually produces a fairly lengthy report of violations and warnings. This can be overwhelming, especially in large codebases, so how these initial reports are managed directly influences the success of integrating the tool into the project.
Not all warnings are critical, so not everything needs to be dealt with immediately. Learning what to address immediately and what to defer is the key to success. As mentioned above, the maturity and size of the product have a direct influence on the approach, outlined below in more detail.
As the name implies, in this approach, developers decide that after the initial analysis, they won’t let any more critical warnings and violations enter the code base. In other words, they make a commitment to analyze each critical warning to decide its veracity, and implement a timely fix, if it’s indeed a bug.
The team may also decide to add critical warnings already discovered in existing code to the list of bugs in their reporting tool. Examples of these types of warnings might be critical security vulnerabilities such as SQL injections, or serious memory errors such as buffer overflows. In most cases, the less serious warnings can be deferred for later analysis. You might be thinking, “doesn’t this just add to our technical debt?” And if you are, you’re right! But at this stage, we’re OK with that. Any potential bugs within these warnings were already in the technical debt pile. At least now they are identified, and much easier to fix at a later time.
In the case where a product is already on the market and under maintenance, it is still beneficial to identify any lingering bugs and security vulnerabilities in the code, but it’s not feasible for developers to analyze (let alone fix) all these warnings.
In such a case, it makes sense to look at the most critical reports and decide on a course of action. The rest of the warnings are acknowledged – the software team recognizes they exist – but they are mostly deferred for a later time. (This again adds to the organization’s technical debt, but as mentioned above, these bugs already exist as technical debt.) This approach differs from the line-in-the-sand approach in that after identifying the key warnings, you simply defer the rest, without necessarily any analysis.
A project with little existing code is an ideal starting point for static analysis. In this case, the software teams can investigate all warnings that arise and fix found bugs. Unlike the other approaches, there are only a few warnings to manage, so developers can tackle the additional workload. This is also an ideal time to implement and enforce a coding standard through the tools, since violations can be identified and fixed right within the IDE and before any code is submitted to version control (which you could also do in the other scenarios described here).
The adoption of static analysis in the three major stages of maturity is differentiated by how teams deal with the backlog of warnings, as illustrated below:
The adoption of static analysis in the three major stages of maturity: In a greenfield project, most reported warnings are investigated and fixed with little going into the technical debt pile. Projects under development tend to have a backlog of warnings that are mostly deferred with only critical warnings being dealt with, and products under maintenance tend to have most warnings deferred.
One of the main differences between open source or lightweight static analysis tools and commercial advanced static analysis tools is the ability to configure which set of checkers are enabled for the analysis, and filter out reported results based on warning category, file name, severity and other attributes. This helps with the goal of not getting overwhelmed – developers can focus on just the types of warnings that they are interested in, and reduce the amount of information provided at any given time.
There is also a difference worth noting between configuring checkers and filtering results. Although it might initially seem better to limit the number of rules in the global configuration, filtering should often be used instead, to limit the scope of reporting rather than eliminate the checker entirely. If a rule that later turns out to be important is turned off in the configuration, there will be no history in the warning repository, so you won’t be able to tell whether an error was introduced by recent changes or was already in the code before static analysis was adopted.
I would recommend using the configuration to simply limit the set of rules to those that are foreseeable as useful for the software team. Again, start with the end goal in mind: if improving security is the key goal, it makes sense to enable all security-related rules, disable less important rules, and enable one of the built-in secure coding standards such as CERT C. Then, if you’re using an advanced static analysis solution like Parasoft C/C++test, you can leverage its built-in management tools to deal with the data produced from the static analysis reports and drive future development focus.
Static analysis tools provide software organizations with the ability to detect and track bugs and security vulnerabilities without needing to execute code. These tools can be applied to existing, legacy, and third-party code, and provide insight into its quality.
The adoption of static analysis varies to some extent based on the maturity of the project. A large body of code does result in numerous warnings, but this is completely manageable, and the success of adoption depends on how teams decide to tackle the results. I’ve outlined techniques for each major maturity level of a project, and how these tools can be integrated into the day-to-day workflow of developers, team leads, and managers.
In my next post, I’ll talk about integrating static analysis into your daily workflows, so make sure to come back for that! Or subscribe to the blog by filling out your name in the box right below this post, to get notified when it comes out.
As a Solution Architect at Parasoft, Billy helps teams strategize and prioritize as they adopt modern software development and testing practices into their organization.