
Q&A with Max Saperstone from Coveros: Part Three – Success and Failure with Test Automation


In part three of my conversation with Max Saperstone, Director of Test Automation at Coveros (read Part One and Part Two), we discuss successes and failures he has experienced with test automation.

Max's experience mirrors what we see in the marketplace: poor planning and a lack of buy-in at all levels do not create a good environment for success. However, when the ROI of automation is well articulated, success is more likely. Let's take a look at Max's experiences in this area.

Test Automation: Success and Failure

Mark Lambert: Let's wrap this discussion up with two last questions. First, give me an example where you went into an organization to help them with test automation and it was a success. What was the reason it went well, and was there something that opened your eyes in a light bulb moment?

Max Saperstone: Interesting question. One of the examples that sticks out in my mind is when I went to an organization that was doing a whole bunch of manual testing. A lot of their testing was really just entering data and doing form validation. Their challenge was that they were spending weeks testing their system because of its complexity and the different combinations of inputs needed to exercise the application. They even knew that they weren't covering everything.

We sat down with them and talked through the requirements and everything that they were looking for. They said, "You know what? Honestly, we don't know." Part of what they were doing was manually entering zip codes, and the application reported back different users. For each user that was returned, they needed to do another query to make sure that the information was correct.

We created and ran some scripts for them, and it turned out to be, I think, three quarters of a million different combinations. It took about eight hours one night to run all of these tests. They looked at the data, and we asked, "All right. Well, what do we do with this?" They said, "We have no idea."
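To make the scale of that concrete, here is a minimal sketch of the kind of data-driven script Max describes: it generates the input combinations, exercises the application for each one, and records the outputs for later review. The `lookup_users` function, the specific input dimensions, and the CSV layout are hypothetical stand-ins, not the actual system from this engagement.

```python
import csv
import itertools

def lookup_users(zip_code, state, account_type):
    """Placeholder for the real application call (an API request or UI script).

    Returns a deterministic fake so the sketch runs end to end.
    """
    return [f"user-{zip_code}-{account_type}"]

# Input dimensions to combine; in the real engagement, the combinations
# multiplied out to roughly three quarters of a million cases.
zip_codes = ["02134", "10001", "90210"]            # illustrative values only
states = ["CA", "NY", "TX"]
account_types = ["standard", "premium", "trial"]

with open("lookup_results.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["zip_code", "state", "account_type", "users_returned"])
    for zip_code, state, account_type in itertools.product(
        zip_codes, states, account_types
    ):
        users = lookup_users(zip_code, state, account_type)
        # Record every output so a human can review it, and so it can later
        # become the expected value in a regression test.
        writer.writerow([zip_code, state, account_type, ";".join(users)])
```

The point of the sketch is that generating and executing the full combination space is cheap for a machine, which is exactly what made the overnight run practical.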

We knew that all this stuff was in their database, but there was no way for us to verify it ourselves. So, someone actually sat down with this data, and it probably took them over a month. They eventually came back and said, "We went through everything and analyzed all of this data. It's not all correct." They found 30 or 40 different discrepancies, and they never would have caught these before because they were essentially doing random sampling.

What we were able to do was take that dataset and, instead of just scripting those outputs, turn them into tests. It still took all night to run, but they weren't spending weeks analyzing results with poor coverage. These new tests verified that all of the outputs were actually correct, and the organization was able to continue adding new customers to their database with less work involved in testing the results.
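This is essentially a golden-master (approval testing) approach: once a human has verified the recorded outputs, they become the expected values for an automated regression suite. A hedged sketch using pytest, reusing the hypothetical `lookup_users` call and `lookup_results.csv` file from the example above, might look like this:

```python
import csv
import pytest

# Hypothetical module wrapping the application under test; it would expose
# the same lookup_users(zip_code, state, account_type) call shown earlier.
from lookup_client import lookup_users

def load_verified_cases(path="lookup_results.csv"):
    """Read the human-verified outputs so each row becomes one test case."""
    with open(path, newline="") as f:
        return [
            (row["zip_code"], row["state"], row["account_type"],
             row["users_returned"].split(";"))
            for row in csv.DictReader(f)
        ]

@pytest.mark.parametrize(
    "zip_code,state,account_type,expected_users", load_verified_cases()
)
def test_lookup_matches_verified_baseline(zip_code, state, account_type, expected_users):
    # Any drift from the verified baseline fails immediately and flags that
    # row for review, instead of relying on random manual spot checks.
    assert lookup_users(zip_code, state, account_type) == expected_users
```

The design choice here is that the expensive human verification is done once, and from then on the suite only asks whether the application still matches that known-good baseline.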

Not only did we find bugs, but this new automation also made it possible for the client to actually keep track of everything. To me, that was a great success. Combining automation and smart manual effort freed up a whole bunch of time and effort. Another success was finding bugs that would absolutely have had an impact on the company's bottom line if they had ended up in the final product.

Mark Lambert: So, they got complete test coverage by leveraging an entire dataset, rather than sampling randomly to try to find defects?

Max Saperstone: Pretty much. Again, it was complete coverage of one area of the application, but it was the use of automation that was so cool to actually see. However, this was an expensive undertaking, and if a client really wants us to throw all we have at the problem, we will. In this case, it was a huge lift for our client, and in the end, it turned out to be well worth it for them, which was good.

Mark Lambert: Okay, last question. What about an example where the project just went totally wrong? What was it that caused that implementation to go wrong? What were the lessons learned?

Max Saperstone: I was working at a different company than I am now, and a client brought us in and said, "We really need you to automate our testing. We have all these manual testers who spend weeks doing their regression tests. What we really need is to automate these tests and speed up the testing time."

This was a while ago, and naively, I said, “Sure.” I started looking at the problem, writing some tests, and talking to the manual testers to figure out what they were spending most of their time on. After a month or two, I had a decent suite of tests and handed those over to the testers.

I said, “Here you go. You no longer have to run manual tests. These automated tests will investigate some of the areas of the application for you.”

What happened is that the testers weren't running these automated tests, or they would run them but then rerun them manually. As I learned, part of the reason was that the testers simply didn't care to run them. They didn't necessarily trust them, and they didn't see the value that they were getting out of automation. Also, because of the way the tests were constructed, they weren't the full end-to-end tests that the testers were used to running. The tests did exercise the parts of the application that they needed, but the testers still had to run through a lot of other steps to get the coverage they needed.

In the end, they said, "Well, if I'm going to have to run these manual tests anyway, we're not really saving that much time." The client just didn't see the value being added for them. I think the main issue for this project was really communication. We figured out what the testers spent a lot of their time on, but we didn't talk to them about how they tested the software and what they would like to be able to automate. We needed to ask them, "If you could do absolutely anything, what would it be?"

We focused too much on best practices. The problem was that these practices and the tests we automated didn't fit into their overall quality workflow, which is what they really needed in order to free up some of QA's time.

I think we should have talked more about high-level strategy and gotten a better sense of what we could have done to immediately reduce the number of manual tests. We should have asked what we could do that the testers would really get excited about trying to use. Or even, what did they think made sense to automate, and what technology were they comfortable with?

It turned out that some of the testers didn't even want to click 'Go' on their machine to run an automated test. However, others were comfortable with automation and with getting a report emailed to them every morning saying, "This ran, this is done." Unfortunately, those discussions didn't happen early on.

So, we went back and iterated on this project. But there was definitely a lot of upfront effort that could have been saved with more discussion about this high-level testing strategy. And that goes back to that first comment we were talking about.

Mark Lambert: So, without planning and without buy-in from the teams, there's no trust or perceived value. Jumping blindly into test automation didn't really help.

Max Saperstone: Exactly. It was doing test automation for test automation's sake versus really determining what the true value is and how we can get the most out of it.

Mark Lambert: Great. Well, thank you very much, Max. Really appreciate the time with this. I think this was a great discussion.

Written by

Mark Lambert

VP of Products at Parasoft, Mark is responsible for ensuring that Parasoft solutions deliver real value to the organizations adopting them. Mark has been with Parasoft since 2004, working with a broad cross-section of Global 2000 customers, from specific technology implementations to broader SDLC process improvement initiatives.
