Software development teams are scrambling to keep up with today’s relentless demand for more innovative software—faster. Most are exploring new ways to accelerate release cycles (agile, lean, DevOps…). Yet, judging by the number of software failures now making headlines on a daily basis, it’s evident that simply speeding up existing processes isn’t doing the trick.
How can software development professionals respond to this need for speed without increasing the risk of costly defects? That was the topic of Wayne Ariola’s “What Do Defects Really Cost? Much More Than You Think” session at STAREAST last week. Ariola revealed his research on the true cost of software defects—and why a new approach to testing and QA is required if you don’t want to be responsible for a software failure that lands your organization in the headlines.
The following synopsis of that session was written by Noel Wurst, Managing Editor at Skytap, the leading provider of on-demand Environments as a Service (EaaS). It was originally published on the Skytap Blog…
Asking a roomful of software testers, “What do defects really cost?” and then telling them, “much more than you think,” before anyone got the chance to answer is a pretty gutsy move. It’s certainly one that could’ve easily backfired for Parasoft chief strategy officer Wayne Ariola at last week’s STAREAST conference.
It did not backfire, and after waves of evidence of the immense financial impact that defects in production can have on an enterprise, I got the feeling that many in the room made mental notes to have some very serious conversations with various departments upon returning home.
Everyone knows that defects cause long hours of rework, that new feature releases get pushed back, and that patches sometimes have to be applied. But quantifying that damage as an actual financial blow is not only difficult; the resulting numbers are rarely shared with devs, testers, and others outside the investor and executive levels.
Citing familiar software failures at banks and insurers, Sony’s repeated hacks, Target’s 2013 data breach, and American Airlines’ recent iPad glitch, Ariola moved on to a series of hard-to-stomach line graphs showing the plummeting stock prices each of these failures caused.
Immediately following these failures, as news grew and social media shares mounted, these stocks continued to tank. In some examples, once prices began to climb again, they plateaued far lower than the price per share before the release or bug discovery.
Ariola blamed these failures on a “culture of not focusing on software quality,” and no one disagreed. This isn’t to say that those testers in the room aren’t focused on quality, but is everyone else? Not likely. And for something as difficult to change as culture, it’s not the time for testers or anyone else to point fingers. It’s time to right the ship before your organization is the next one in the headlines.
So how do we fix this?
For one thing, Ariola says it’s time to start sharing financial information like stock prices with developers—and I would add product owners, designers, and anyone else who touches a release candidate before it ships. And that doesn’t mean sending a quarterly email to the company with little more than a three-month stock history screenshot in it.
This means looking at the price of your company’s stock at the hour a new release came out, and then tracking other significant moments from there. When was the bug found? Who found it first? Was it publicized? How long did it take to fix? How quickly could support resolve issues and satisfy customers? These are the kinds of metrics that absolutely can impact something as seemingly distant as what’s happening on Wall Street – an arena where many may not realize they have so much influence.
And for those who don’t work for a publicly traded company, there are other metrics to go by, like the number of customers you have or the number of people currently using your mobile app. Ariola asked early on, “How many of you have ever downloaded an app you hated?” Of course every hand went up, and when he then asked, “And what did you do when you realized you hated it?” we all proudly and immediately shouted in unison, “We deleted it!”
This is the mindset of today. The only thing that takes less time than downloading a mobile app is deleting it. Like a stock price, if the number of subscribers or users of your software is falling more often than it’s rising, you have a serious issue.
One suggestion for combating buggy releases was to stop asking, “Are we done testing?” and instead ask, “Does the release candidate have an acceptable level of risk?” Some may incorrectly assume that those two questions are similar enough not to warrant changing the approach to testing—but that assumption only allows the current culture to keep putting your business and customers at risk.
Even if everyone did somehow manage to agree on a definition of “done,” when disaster strikes, nobody’s going to want to hear (or even say) “But we were done testing!” when someone asks how that bug made it into production.
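To make the risk question concrete, here’s a minimal sketch of what a risk-based release gate might look like in code. Everything in it (the metric names, the thresholds, the gate logic) is an illustrative assumption on my part, not anything Ariola prescribed in the session:

```python
# Illustrative sketch of a risk-based release gate.
# The metrics and thresholds below are assumed examples; real ones
# would be negotiated between the business and the testing team.

from dataclasses import dataclass


@dataclass
class ReleaseCandidate:
    name: str
    test_coverage: float        # fraction of code exercised by tests, 0.0-1.0
    open_critical_defects: int  # known severity-1 defects still unresolved
    untested_changes: int       # changed files with no associated tests


def acceptable_risk(rc: ReleaseCandidate,
                    min_coverage: float = 0.80,
                    max_critical: int = 0,
                    max_untested: int = 5) -> bool:
    """Answers 'does this candidate carry an acceptable level of risk?'
    rather than 'are we done testing?'. The thresholds are business
    decisions, which is exactly the point."""
    return (rc.test_coverage >= min_coverage
            and rc.open_critical_defects <= max_critical
            and rc.untested_changes <= max_untested)


if __name__ == "__main__":
    candidate = ReleaseCandidate("v2.4.0-rc1",          # hypothetical build
                                 test_coverage=0.83,
                                 open_critical_defects=1,
                                 untested_changes=2)
    verdict = "ship" if acceptable_risk(candidate) else "hold"
    print(f"{candidate.name}: {verdict}")  # one open critical defect -> "hold"
```

The value of a gate like this isn’t the specific numbers; it’s that the ship/hold decision becomes an explicit, recorded risk judgment instead of a vague sense of being “done.”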
As the session continued, some in the room shared stories of how they were increasing coverage with continuous testing, “shifting left,” and utilizing cloud-based dev/test resources—and I hope these stories helped inspire those who knew the challenge awaiting them back home.
As I read back over this recap, I realize I’ve made it sound like the session was some fire-and-brimstone sermon meant to scare us all into fleeing back to our offices and never seeing free time or our families ever again, but that’s far from the case.
Everyone got the message without being beaten over the head, it applied to every software industry in the world, there was loads of involvement from the audience, and Ariola almost managed to go the entire session without namedropping or pitching his own company’s wares a single time—something I’ve literally never seen done during a vendor presentation.
During the Q&A portion at the end, one attendee excitedly asked, pen and paper in hand, “Do you have any service virtualization, test automation, or continuous testing tools that you would recommend?”
And after a chuckle, he had no choice but to suggest Parasoft as a great option, and the laughter and applause from the crowd proved that he’d definitely earned the plug.
Want to learn more about the cost of quality—and how to ensure that you’re not the one responsible for releasing a billion-dollar bug into the wild? Read Parasoft’s 44-page Continuous Testing eBook today to learn how to get started.
From Alan Zeichick, SD Times
“Ariola and Dunlop nail the target: It’s all about risk. That’s what insurance is all about, that’s what attorneys are all about, that’s the sort of decision that every business and technology manager makes all day, every day. We have to live with risk and make tradeoffs. More testing? At some point, indeed, we have to cut it off.
It’s difficult if not impossible to assess the business risk of software quality. Yes, software quality is expensive. The higher the quality, the more time it takes to deliver software, and the greater the resources you must spend on software quality. And yes, it is expensive to have software failures—you might lose money, lose customers, suffer lawsuits, damage your brand, end up on the front page of The Wall Street Journal. Not good…
Ariola and Dunlop make a good point in their short book: We mustn’t accept that the trend toward accelerating the development process will magically improve software quality; indeed, we should expect the opposite. And if we are going to mitigate risk in today’s environment, we need to reengineer the software development process in a way that considers business risk to be one of the metrics, along with the other traditional results of our automated testing and Continuous Integration systems.”
Parasoft’s industry-leading automated software testing tools support the entire software development process, from when the developer writes the first line of code all the way through unit and functional testing, to performance and security testing, leveraging simulated test environments along the way.