DevOps Brings El Nino-Scale Impacts to Software Testing


Slams! Overwhelms! Wreaks Havoc!

That’s the sort of language used in today’s headlines describing El Nino’s first advance on California—and it’s just as applicable to DevOps’ impact on software testing.

With DevOps, the constant deluge of new functionality undeniably creates a torrential disruption to traditional testing. Many teams are barely keeping their heads above water trying to ensure that each new requirement actually works before it’s deployed. So how can you ensure every “little change” doesn’t introduce side effects that ripple across the application and make the end user more frustrated than a driver on a flooded California freeway?

How is software testing impacted by DevOps and Agile?

Traditionally, testing has been a time-boxed event. You’d wait for development to produce a viable build, then QA had a window to exercise whatever they needed to test. When they felt they were “done testing” or ran out of time, testing stopped.


With the rise of Agile and DevOps, two main things happened:

  • The (already-shrinking) late-cycle time dedicated to exercising the application disappeared completely.
  • The prevailing methods of testing (manual testing and GUI testing) became obsolete because they were too slow, time-consuming, expensive, and fragile for a world of short iterations and constant change.

To release with confidence despite the speed and frequency of today’s release cycles, we need to stop asking “Are we done testing?” and shift the focus to “Does the release candidate have an acceptable level of business risk?” If we can answer that question, we can determine whether the application is truly ready to progress through the delivery pipeline.
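To make that question concrete, a release gate can compare the accumulated risk of failing tests against a risk budget the business has agreed to. The sketch below is a minimal illustration of the idea only; the test names, risk weights, and budget are hypothetical, not taken from any particular tool.

```python
# A minimal sketch of a risk-based release gate. The test names, risk weights,
# and risk budget below are hypothetical, chosen only to illustrate the idea.
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    passed: bool
    business_risk: int  # 1 = cosmetic issue, 5 = revenue/compliance critical

def residual_risk(results: list[TestResult]) -> int:
    """Sum the risk weights of the tests that are still failing."""
    return sum(r.business_risk for r in results if not r.passed)

def acceptable(results: list[TestResult], risk_budget: int = 5) -> bool:
    """Promote the release candidate only if residual risk fits the budget."""
    return residual_risk(results) <= risk_budget

if __name__ == "__main__":
    results = [
        TestResult("checkout_end_to_end", passed=False, business_risk=5),
        TestResult("profile_tooltip_text", passed=False, business_risk=1),
        TestResult("login_flow", passed=True, business_risk=5),
    ]
    print("Residual business risk:", residual_risk(results))
    print("Promote candidate:", acceptable(results))
```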


Given the rising cost and impact of software failures, organizations can no longer afford to unleash a release that could disrupt the existing user experience or introduce new features that expose the organization to new security, reliability, or compliance risks. To prevent this, the organization needs to extend from validating bottom-up requirements to assessing the system requirements associated with overarching business goals—e.g., via Continuous Testing.

What’s involved in moving from automated testing to Continuous Testing?

Automating testing is a critical step in moving towards Continuous Testing. However, if one of your automated tests fails, do you know what it really means? Does it indicate a critical business risk, or just a violation of some naming standard that nobody is really committed to following anyway? And what happens when it fails? Is there a clear workflow for prioritizing defects against business risks and addressing the most critical ones first? And for each defect that warrants fixing, is there a process for exposing all similar defects that might already have been introduced, as well as preventing the same problem from recurring in the future? This is where the difference between automated and continuous becomes evident.

To evolve from automated testing to continuous testing, you need the following:

  1. Clearly defined business expectations, with business risks identified per application, team, and release.
  2. Defects automatically prioritized against the business drivers, with a clear path to mitigating those risks before the release candidate goes live (see the sketch after this list).
  3. Testing in complete test environments continuously using simulation—this is critical for protecting the current user experience from the impact of change.
  4. A feedback loop for defect prevention: look for the patterns that emerge and use them as an opportunity to design and implement practices that keep similar defects from being introduced in the future.
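As a deliberately simplified illustration of point 2, the sketch below ranks open defects by the weight of the business driver they put at risk, so the most critical ones are addressed first. The driver names and weights are assumptions made for the example, not output from any specific product.

```python
# An illustrative sketch of point 2: ranking defects by the business driver they
# put at risk. Driver names and weights are assumptions, not tool output.
BUSINESS_DRIVERS = {
    "payment-processing": 10,   # direct revenue impact
    "regulatory-reporting": 8,  # compliance exposure
    "ui-polish": 2,             # cosmetic, low impact
}

def prioritize(defects: list[dict]) -> list[dict]:
    """Order defects so the highest-weighted business drivers are fixed first."""
    return sorted(
        defects,
        key=lambda d: BUSINESS_DRIVERS.get(d["driver"], 1),
        reverse=True,
    )

if __name__ == "__main__":
    open_defects = [
        {"id": "D-101", "driver": "ui-polish"},
        {"id": "D-102", "driver": "payment-processing"},
        {"id": "D-103", "driver": "regulatory-reporting"},
    ]
    for defect in prioritize(open_defects):
        print(defect["id"], "->", defect["driver"])
```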

Does manual testing have a place in DevOps?

Simply put, you need to automate as much as possible. Automation has to become the norm for modern testing. That being said, some things cannot be automated, so some degree of manual testing may be inevitable. To ensure that manual testing does not become a bottleneck in your delivery pipeline, you need to keep it off your critical path: do as much of it as your process requires and your resources permit, but don’t make it a gate in your automated delivery process.
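One way to picture keeping manual work off the critical path: only automated suites are allowed to block promotion, while manual sessions are scheduled in parallel and tracked separately. The sketch below is illustrative only; the suite and task names and the promote_build helper are hypothetical, not any real CI tool’s API.

```python
# An illustrative sketch (not any real CI tool's API): only automated suites can
# block promotion; manual sessions are scheduled in parallel and tracked, but
# never gate the pipeline. Suite and task names are hypothetical.
BLOCKING_SUITES = ["unit", "api", "integration"]                 # automated, on the critical path
NON_BLOCKING_TASKS = ["exploratory-session", "usability-review"] # manual, off the critical path

def run_suite(name: str) -> bool:
    """Placeholder for invoking your actual automated test runner."""
    print(f"running automated suite: {name}")
    return True

def promote_build() -> bool:
    for suite in BLOCKING_SUITES:
        if not run_suite(suite):
            print(f"blocked by automated suite: {suite}")
            return False
    for task in NON_BLOCKING_TASKS:
        print(f"manual task scheduled (non-blocking): {task}")
    return True

if __name__ == "__main__":
    print("Promote:", promote_build())
```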

How does the concept of “QA” need to evolve for DevOps?

Even though the term “QA” is derived from “quality assurance,” the QA role on software development teams has been more or less focused on tactical testing.  For the more modern collaborative process initiatives (DevOps, lean, agile…) to take hold, the role of QA must shift back to quality assurance.  In this case, QA is responsible for defining and enabling a continuous, proactive process that identifies and prevents business risks throughout the software lifecycle.

If QA = quality assurance, then QA being focused on creating and managing functional test scripts seems strange; this task is neither preventative nor process-oriented. The concept of quality and how it is defined is an organizational and business responsibility that should be reflected in the company’s culture. Testing is just one of many activities that ensure the organizational quality targets are being achieved.

How do simulation technologies fit into DevOps and Continuous Testing?

After organizations start accelerating their software delivery pipeline for Agile and DevOps, they often reach the point where they need to test but can’t exercise the application under test (AUT) because a complete test environment is not yet ready. Many teams use simulation technologies such as Environments-as-a-Service (EaaS) and service virtualization to get around these roadblocks.

To truly protect the end user experience, we need to aggressively test and defend the end user’s experience across key end-to-end transactions. With today’s systems, those transactions pass through a high number of different components, so it’s very difficult to accommodate that in a single staged test environment—cloud or not. Simulation helps us get around this.  For the most realistic simulated environment, we need to really understand how components are working in an operational environment and transfer this to the simulation.

What’s the difference between “Environments-as-a-Service” and “Service Virtualization”?

The application stacks that are under your control (cloud-ready) can be imported and imaged via an elastic EaaS in a cloud. Service Virtualization then allows you to simulate the behavior of those dependencies you cannot easily image (e.g., third-party services, SAP regions, mainframes, not-yet-implemented APIs, etc.), or those you want to stabilize for test coverage purposes.
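As a rough, hand-rolled illustration of the service virtualization side, the stub below stands in for a dependency you cannot easily image and returns stable, canned responses so tests stay repeatable. The third-party pricing endpoint, port, and payload are invented for the example; real service virtualization tools capture and model far richer behavior than this.

```python
# A hand-rolled sketch of the service virtualization idea: stand in for a
# dependency you cannot easily image (a third-party pricing API here, purely
# hypothetical) with canned, stable responses. Real service virtualization
# tools capture and model far richer behavior than this.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSES = {
    "/prices/widget": {"sku": "widget", "price": 9.99, "currency": "USD"},
}

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED_RESPONSES.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "unknown path"}).encode())

if __name__ == "__main__":
    # Point the application under test at http://localhost:8080 instead of the
    # real dependency so end-to-end tests stay stable and repeatable.
    HTTPServer(("localhost", 8080), VirtualService).serve_forever()
```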


The result is that IT teams or environment administrators can easily set up complete dev/test environments that testers and developers can quickly (and simultaneously) configure and provision on demand.

Written by

Parasoft

Parasoft’s industry-leading automated software testing tools support the entire software development process, from when the developer writes the first line of code all the way through unit and functional testing, to performance and security testing, leveraging simulated test environments along the way.
