The desire to adopt agile typically comes from within development. After all, agile was created by developers for developers. But without a robust strategy for agile quality that effectively meets the demands of the business, agile teams will remain benched.
To avoid this, you can extend well-known agile quality practices to ensure that your software satisfies business needs effectively and efficiently. Here’s how:
In agile development, everyone on the team, from the developer to the product manager, must be acutely aware of quality throughout the process, with quality concerns an integral part of their daily workflow and multiple quality tasks performed simultaneously. This allows for faster discovery and remediation of defects, reducing the time, effort, and cost required to eliminate each one.
For instance, at any given point, tasks such as writing code, running static analysis, executing unit tests, and reviewing recent changes might all be in progress at the same time.
To fit quality into the various team members’ daily activities, you can follow this task distribution:
- Development Managers / Scrum Masters
- QA Testers
- Product Managers
Policy is not traditionally associated with agile development testing, and many pure “Agilists” worry that the whole notion of policy contradicts the collaborative, lightweight nature of agile processes. Yet we have found that many teams working with agile development methodologies actually appreciate having clear expectations regarding how code should be written and tested. With expectations clearly defined, the team does not need to waste time trying to figure out exactly what is expected when — or constantly reworking the code to remedy misunderstandings.
You can start off by implementing a policy that simply formalizes the coding and quality practices that the team is already following. Then, working from that baseline, you can incrementally introduce new policies that the team has discussed and agreed to follow.
Possible policy modifications can then be discussed in daily stand-up meetings, or during code reviews. This allows team members to provide valuable feedback regarding policy refinements (e.g., fine-tuning a static analysis rule to allow certain exceptions) and extensions (e.g., requiring additional test coverage on a module that QA reports as being especially buggy).
In addition to ensuring that the entire team is on board, it’s also essential to ensure that policy enforcement serves as a guide, not an intrusion. Ideally, an automated infrastructure checks compliance in the background and stays invisible unless the agreed-upon expectations are not met. At that point, it determines which team member should be notified and reminds them to perform the tasks required to meet those expectations.
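As a sketch of this “silent unless violated” model, the loop below routes findings to responsible owners and produces nothing at all when the policy is met. The data structures and names here are hypothetical, simplified for illustration; they are not any particular product’s API.

```python
def enforce_policy(findings, owners):
    """Route each policy violation to the responsible team member.

    `findings` is a list of dicts naming the violated rule and the file it
    occurred in; `owners` maps files to the developers responsible for them.
    Both shapes are hypothetical.
    """
    notifications = {}
    for finding in findings:
        owner = owners.get(finding["file"], "team-lead")
        notifications.setdefault(owner, []).append(finding["rule"])
    return notifications  # an empty dict means the policy is met: stay silent

owners = {"auth.py": "alice", "cart.py": "bob"}
print(enforce_policy([], owners))  # no violations, no noise: {}
print(enforce_policy([{"file": "auth.py", "rule": "no-empty-catch"}], owners))
```

The key design point is the quiet default: team members hear from the infrastructure only when something actually needs their attention.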
Essentially, the automated infrastructure operates like an EKG system that is hooked up to a hospital patient, constantly measuring electrical activity. If everything is fine, it runs inconspicuously in the background. But if the readings start to flatline, alarms are sounded and people understand that action is needed.
If defects stemming from poorly implemented code are constantly passed on to QA, the result is wasted QA effort, interrupted development, and slower iterations.
Many such defects could be caught by completely automated development testing — testing that can be performed on the integration server or from the desktop with a single click.
The starting point is having an effective coding policy and implementing it with static code analysis. Static code analysis checks for known anti-patterns in source code constructs and reports each occurrence of a match. It can be performed quickly and is guaranteed to find all cases of code that match a pattern.
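To make the pattern-matching idea concrete, here is a minimal, hypothetical static check written with Python’s standard `ast` module. It flags every bare `except:` clause, a well-known anti-pattern, and, like any syntactic check, it is guaranteed to report each occurrence of the pattern it looks for. A real static analysis tool applies hundreds of such rules.

```python
import ast

def find_bare_excepts(source: str) -> list:
    """Return the line numbers of every bare 'except:' handler in the source.

    A bare except is an anti-pattern because it silently swallows
    unrelated errors (including KeyboardInterrupt and SystemExit).
    """
    tree = ast.parse(source)
    return [node.lineno for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

sample = "try:\n    risky()\nexcept:\n    pass\n"
print(find_bare_excepts(sample))  # reports line 3, where 'except:' appears
```

Because the check is purely syntactic, it runs in milliseconds and can be wired into a pre-commit hook or the integration server without slowing anyone down.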
Defensive coding can make code immune to many categories of defects, including many defects that are extremely difficult to detect in testing, by eliminating their root causes.
For instance, defensive coding can prevent many causes of crashes, resource leaks, security vulnerabilities, and concurrency problems such as deadlocks.
Not every type of defect can be prevented by static code analysis. But every defect that you can prevent is one less defect that slows the team’s iteration. QA is relieved from having to find and document the defect. Development is relieved from having to stop whatever they are doing, refresh their memory of the problematic code, fix the code, and verify that the problem is solved. And QA does not have to re-test the developer’s modification.
To find the types of defects that are beyond the scope of static analysis, try techniques such as runtime error detection and automatically generated regression tests, both of which require no additional manual effort.
In addition, we strongly recommend peer code review, which does require effort, but is the only effective way to expose high-level defects related to logic, design, etc. Tip #10 will discuss some ways to use automation to streamline code reviews.
When your developer or QA testing efforts expose defects that made it past your team’s quality policy, don’t just fix that one defect and move on.
This is like an automobile manufacturer noticing that a car came off the assembly line with a faulty brake installation, fixing that one car’s brakes, and hoping that the problem doesn’t happen again. A safer approach—one that would save a lot of time, hassle, and grief in the long run—would be to examine the assembly line, determine how that problem got introduced, then fix its root cause.
In terms of the software development process, this approach involves working with development and QA to determine how the defect was introduced, then adjusting the process so that the same class of defect cannot recur.
In many cases, a coding policy change will do the trick: by enabling or reconfiguring certain static analysis rules, you can find all other instances of existing code that is prone to the same type of defect—and you can also ensure that new or modified code steers clear of the known pitfall. Often, runtime error detection and data flow static code analyzers even point you to the precise static analysis rules that you can follow to prevent the specific defects they expose.
Other times, the resolution might involve changing the way that code is developed or tested, or adding an item to the list of things to check during manual code inspections.
Ultimately, this immunizes your software development testing process against many of your most common software defects.
Since overly complex code has been proven time and again to be slower, more difficult, and riskier to update and extend than simpler code, it is helpful to zero in on complex code—say, any class or method whose Cyclomatic Complexity is 10 or higher. This code might work fine right now, but it could cause the team considerable grief down the road when what appeared to be a simple functionality extension ends up requiring a large-scale rewrite.
Using the metrics calculations for one of your easy-to-maintain projects as a guideline, set realistic boundaries and thresholds for selected metrics, then ensure that team members are alerted when metrics are outside of the prescribed range. This will reduce the time and effort required to modify and extend code in subsequent iterations. It will also help new developers get up to speed faster.
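As an illustration of such a threshold check, the sketch below approximates McCabe-style cyclomatic complexity by counting branch points in a function’s syntax tree. This is a simplified count for illustration, not a full metrics engine.

```python
import ast

# Decision points that each add an independent path through the code
BRANCHES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def complexity(func: ast.FunctionDef) -> int:
    """McCabe-style approximation: one base path plus one per branch."""
    return 1 + sum(isinstance(n, BRANCHES) for n in ast.walk(func))

def over_threshold(source: str, threshold: int = 10) -> dict:
    """Report functions whose complexity meets or exceeds the threshold."""
    tree = ast.parse(source)
    return {n.name: complexity(n) for n in ast.walk(tree)
            if isinstance(n, ast.FunctionDef) and complexity(n) >= threshold}

src = "def f(x):\n    if x > 0:\n        return 1\n    if x < 0:\n        return -1\n    return 0\n"
print(over_threshold(src, threshold=3))  # {'f': 3}: two ifs plus the base path
```

Functions flagged by such a check can be scheduled for refactoring before they harden into maintenance problems.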
If you’re already practicing test-driven development (TDD) and continuous automated regression testing, you’ve got a great foundation for determining when modifications change or break your previously-validated functionality. Yet, if you really want to rest assured that your modifications don’t change or break some application behavior that you did not explicitly test with your TDD test cases, more extensive regression testing is needed.
One option is to spend lots of time writing many more test cases. But who has time for that? A much simpler approach is to automatically generate a battery of regression tests that capture the application’s current behavior as a baseline.
The result is an easy and efficient way to determine when and how your code modifications impact your code—even the parts of the application that are not covered by the functional / TDD tests that you wrote manually.
The only work required is to keep the test suite in sync with application changes. When developers arrive at work each morning, they review and respond to any test failures reported for their code. In doing so, they will either address functional defects in their code or update the test suite to reflect the correct behavior of the code.
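The idea can be sketched as a characterization (golden-master) test generator: record the code’s current outputs as the expected baseline, then fail whenever a modification changes them. The `price` function below is purely hypothetical.

```python
def generate_regression_suite(func, inputs):
    """Record the function's current outputs as the expected baseline."""
    baseline = {args: func(*args) for args in inputs}

    def run_suite(candidate):
        """Return the inputs whose behavior no longer matches the baseline."""
        return [args for args, expected in baseline.items()
                if candidate(*args) != expected]

    return run_suite

def price(qty, unit_cost):  # hypothetical code under test
    return qty * unit_cost

suite = generate_regression_suite(price, [(1, 10), (3, 5), (0, 99)])
print(suite(price))                   # unchanged code: no failures
print(suite(lambda q, u: q * u + 1))  # a behavior change is caught
```

Note that a failure does not always mean a bug: when the new behavior is intentional, the developer updates the baseline instead, which is exactly the daily review-and-respond routine described above.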
The systems that agile teams are building today are growing in size and complexity. As a result, developers often struggle with how to test the specific components they worked on because these components operate as part of complicated systems—they are connected to the outside world, staging systems are difficult to establish and maintain, each test requires considerable setup, and so on. It might be feasible to run a test once, but you really need to run all your team’s tests daily so that you’re immediately alerted in case your constant modifications end up breaking previously-verified functionality.
Test case tracing allows the team to continuously test parts of the application without the hassles of dealing with the complicated system. Using a tracing technology like Parasoft Tracer, you can create unit test cases that capture the functionality specified in the requirements.
From the application GUI or using a SOA or Web test client, you just exercise your use cases while tracing is enabled. As the use case is executed, the technology monitors all the objects that are created and all the data that comes in and goes out, then creates a unit test for you that represents this operation—and even sets up the proper initial conditions.
You can then take this unit test case and execute it on a separate machine, away from the original system. This means that you can use a single machine—such as the continuous integration server or even a standard developer desktop—to reproduce the behavior of a complicated system during your verification procedure. Add all of these tests to your regression test suite, run it continuously, and you’ll immediately be alerted if this functionality that you captured is impacted by your code modifications.
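Parasoft Tracer instruments the running application itself; the toy decorator below only sketches the underlying capture-and-replay idea. The `lookup_account` call is a hypothetical stand-in for an operation that would normally reach into a complicated live system.

```python
import functools

captured = []  # (function name, args, kwargs, observed result)

def trace(func):
    """Record each call's inputs and output so the scenario can be replayed."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        captured.append((func.__name__, args, kwargs, result))
        return result
    return wrapper

@trace
def lookup_account(account_id):
    # Imagine this call reaching out into the complicated live system
    return {"id": account_id, "status": "active"}

lookup_account(42)  # exercise the use case with tracing enabled

# Later, on a separate machine: replay the capture as a regression check
name, args, kwargs, expected = captured[0]
assert lookup_account.__wrapped__(*args, **kwargs) == expected
```

The captured tuples play the role of the generated unit tests: they carry both the initial conditions (the arguments) and the expected outcome, so they can run anywhere.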
Essentially, this helps you address the complexity of today’s systems in two ways: realistic test cases are captured from the live system with minimal effort, and those tests can then be replayed in isolation, without access to the original environment.
Agile teams commonly work on distributed heterogeneous applications (e.g., SOA, Web, Cloud) where one or more component is incomplete, evolving, unstable, inaccessible, or otherwise unavailable for testing. Agile methodologies call for continuous and automated (when possible) regression testing, but how can you run automated end-to-end functional tests nightly when parts of the system under test are not readily available? If you delay testing until everything is ready and available for testing, you risk slipping into a waterfall-like paradigm.
One way to tackle this challenge is to emulate the behavior of the necessary components. Essentially, you construct and deploy virtualized assets that emulate (virtualize) application behavior, performance, and data. This is achieved by applying a concept known as Service Virtualization.
Service Virtualization was conceived to emulate the behavior of unavailable and evolving web services, then evolved beyond the canonical service protocols to support multiple message types and protocols—JDBC, MQ, JMS, and more. This extended service virtualization radically reduces the configuration time, hardware overhead, and data management efforts involved in standing up and managing a realistic and sustainable dev/test environment.
The ideal service virtualization solution should aim for virtualized assets that can faithfully emulate the behavior, performance, and data of the dependent components.
With such virtualized assets, you can replace dependent systems, methods, or components that would otherwise inhibit the team’s ability to effectively validate the requirements that need to be tested or to verify that detected problems have been effectively resolved.
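A virtualized asset can be sketched as a stand-in that serves recorded responses and emulates the real system’s latency. The operations, payloads, and class design below are made up for illustration and are far simpler than a real service virtualization product.

```python
import time

class VirtualAsset:
    """Emulates a dependent service: canned data plus performance knobs."""

    def __init__(self, responses, latency_ms=0):
        self.responses = responses    # recorded or hand-authored replies
        self.latency_ms = latency_ms  # emulate the real system's performance
        self.log = []                 # every request, for later inspection

    def call(self, operation, payload):
        self.log.append((operation, payload))
        time.sleep(self.latency_ms / 1000.0)
        return self.responses.get(operation, {"error": "unknown operation"})

# Stand in for an unavailable account-provisioning service
provisioning = VirtualAsset(
    {"createAccount": {"status": "OK", "accountId": "A-1001"}})
reply = provisioning.call("createAccount", {"name": "test-user"})
print(reply)
```

The request log is what makes the asset useful beyond stubbing: the team can verify not just what the application received, but what it actually sent to the dependency.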
For instance, consider a services-based account provisioning system that’s losing orders. To solve this problem, the development team has to test the fixed application in a safe sandbox that emulates the transaction flows of the production environment—or risk breaking the system upon redeploying the application. Using emulation, the team can exercise their services in context, without impacting partners’ normal business transactions.
This concept isn’t new, but it’s becoming increasingly important as enterprise systems continue to become more distributed and heterogeneous. Moreover, this increased complexity makes it necessary to expand the concept of emulation to extend beyond services and also include other essential system components—for example, the database, the Web interface, the broker, and so forth—and to be able to emulate the various protocols that commonly co-exist across such systems.
Although agile methodologies recommend automated testing whenever possible, some degree of manual testing is always required. Each time the application is updated, the QA team typically either repeats all manual tests, or spends a considerable amount of time trying to figure out which of those test cases might be related to the application part that changed. Either way, much time is wasted.
With change-based testing, the team can instantly see which requirements are impacted by recent changes and which tests need to be executed again. This saves a tremendous amount of QA time and resources that could be better applied to value-added tasks.
To enable change-based testing, requirements are automatically correlated with tests, source code, and development/testing tasks. With these correlations in place, the current level of verification for each requirement or task (including task pass/fail status and coverage) can be assessed at any time by tracing back to all associated tests.
Moreover, the team can instantly gain an objective assessment of which requirements are actually working as expected, which defects are resolved, and which requirements and defects still need testing. This real-time visibility into true requirement and defect status helps you prevent late-cycle surprises that threaten to derail schedules and budgets.
Stand-up meetings are an important part of agile processes—and are an ideal forum for keeping quality issues at the forefront of development. To keep these meetings at the recommended 15-minute length (and still free up some time to cover quality), it’s important to streamline what’s covered and take more involved issues “offline.”
Two strategies that help you optimize these meetings are automated task distribution and monitoring, and automated code review.
Many of the most boring and tedious aspects of stand-up meetings can be eliminated with automated task distribution and monitoring.
Using a software development management system such as Parasoft DTP, the manager or scrum master converts requirements and issues into actionable, measurable work tasks which are then automatically distributed directly to the responsible developer’s IDE (according to manual assignments or predefined guidelines).
When a developer is ready to start working on an assigned task, he or she marks it as “in progress,” works on the task as normal, then marks it as “done” when it’s completed. Without interrupting the developer’s workflow, you can automatically track which files are related to the given task and how much time is spent working on these files. Real-time status information is always available in the system.
The tracking is key for evaluating the team’s current progress and capacity in preparation for the stand-up meetings. It also keeps QA well informed about what’s ready for testing.
Code review typically involves long, tedious meetings that require a tremendous amount of preparation. An automated code review system can help the team streamline code reviews by automating the mechanical parts of the process: identifying updated code, matching it with the designated reviewers, and tracking each review issue through to resolution.
The team is then free to focus on the actual peer review: a creative process that cannot (and should not) be automated.
Code reviews do not necessarily need to cover every line of code. For instance, some teams might prefer to review only the most dangerous segments of their code (e.g., the most security-sensitive or deadlock-prone areas of the code base). The expectations regarding code review should be discussed with the team and set in the quality policy.
Parasoft’s industry-leading automated software testing tools support the entire software development process, from when the developer writes the first line of code all the way through unit and functional testing, to performance and security testing, leveraging simulated test environments along the way.