The best way to bring service virtualization into your organization is step by step, applying it where it's most valuable, so you can lower the total cost of testing and gain the power to truly control your test automation process with a fully automated DevOps workflow.
When you decide it’s time to, say, lose some weight, you might do some research and end up with advice like, “Stop drinking alcohol! Start eating kale! Go to bed at 8 o’clock! Walk 5 miles every day!” And while it might make sense to embrace all of these activities to adopt a healthy lifestyle, if you try to adopt them all at once, you’ll probably fail. Instead, you need to go step by step: add an extra exercise here, make a healthy choice for food there… slowly get yourself up to a level where you can really diet like a pro.
Service virtualization is no different. I've helped numerous customers adopt this valuable DevOps enabler over the years, and I find that most organizations want to take the big bang approach: immediately bringing in a fully deployed solution that spans multiple teams and is integrated into their continuous delivery pipeline. And while, yes, all of those things are essential to fully realizing the ROI that service virtualization can deliver, if you try to do them all on day one, you probably won't be able to scale effectively into a full DevOps deployment. So how do you get there?
In this blog, I'll share just that. We're going to follow one individual, from her single free service virtualization license all the way to her organization's full deployment of service virtualization, integrated into their DevOps workflow. This is based on a true story, but for the sake of anonymity, we're going to call this individual Sally.
Meet Sally the Developer. Sally was smart and able to develop at a much faster rate than her colleagues. She had started using mocks to isolate herself during the testing phase, but she was spending a lot of time building the response logic those mocks needed, because the applications she was stubbing out were fairly complex.
So she learned how service virtualization could be used to create a more sophisticated virtual service in a very short amount of time. She downloaded the free version of Parasoft Virtualize, which enabled her to start creating virtual services and easily modify them as the actual services underwent change. As a result, she could do all of her testing and development in a completely isolated environment.
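For contrast, here's a minimal sketch, in plain Python, of the kind of hand-rolled stub Sally had been maintaining before. Every response rule is code that has to be rewritten whenever the real service changes, which is exactly the upkeep a recorded or modeled virtual service takes off your plate. (The service, paths, and data are invented for illustration; this is not Parasoft's tooling.)

```python
# A minimal hand-rolled stub for a hypothetical account-lookup service.
# Every behavior here is code that must be kept in sync by hand as the
# real service evolves.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned response rules, keyed by account id (invented data).
CANNED = {
    "1001": {"status": "ACTIVE", "balance": 2500.00},
    "1002": {"status": "FROZEN", "balance": 0.00},
}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect paths like /accounts/1001
        account_id = self.path.rstrip("/").split("/")[-1]
        record = CANNED.get(account_id)
        status = 200 if record else 404
        body = json.dumps(record or {"error": "unknown account"}).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 9080), StubHandler).serve_forever()
```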
As she was discussing these advantages with some of her coworkers, they too wanted to leverage the services she had created. Because these services were common across the different developers, her coworkers could simply point their applications at Sally's machine and reap the benefits.
So they too got free service virtualization with Parasoft Virtualize, and started creating new services, adjusting them, and consuming them, all from their own free desktop installations. The team made significant progress in development and testing because they were able to remove many of the bottlenecks that had been present in the environment. The team became known for its agility and got all of the best projects.
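Consuming a teammate's virtual service usually comes down to externalizing the dependency's endpoint so it can be repointed without a rebuild. Here's a minimal sketch of that idea, assuming the application reads its backend base URL from an environment variable (the variable name and URLs are invented for illustration):

```python
# Sketch: resolve a dependency's base URL from the environment so the
# same build can hit either the real service or a teammate's virtual
# service. BACKEND_BASE_URL is a hypothetical setting, not a Parasoft
# convention.
import os
import urllib.request

# Real service by default; override to point at Sally's machine, e.g.
#   export BACKEND_BASE_URL=http://sallys-host:9080
BASE_URL = os.environ.get("BACKEND_BASE_URL", "https://payments.example.com")

def get_account(account_id: str) -> bytes:
    with urllib.request.urlopen(f"{BASE_URL}/accounts/{account_id}") as resp:
        return resp.read()
```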
One day, Sally’s team was approached by management, who were curious about the service virtualization solution that the team was using that helped them build and test the applications more rapidly. They wanted to have a discussion around its practical application in the larger environment. There had been some buzz around outages in the integration and production environments caused by legacy applications. The applications relied on a series of Oracle databases as well as a complex ESB and a mainframe.
Those systems were difficult to test against for a number of reasons. Sally and her team were able to show that it was easy to simulate the services behind the ESB, because they were basic REST and SOAP services, plus a couple of JMS and MQ services carrying homegrown XML. To tackle the legacy hardware, they needed to supercharge their service virtualization desktop, so they upgraded to the full version of Parasoft Virtualize.
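To give a flavor of the message-based side, here's a hedged sketch of simulating a request/reply queue service. It uses RabbitMQ via the pika client as a stand-in for the JMS/MQ brokers in the story, and the queue name and "homegrown XML" payload are invented:

```python
# Sketch of simulating a request/reply queue service. RabbitMQ and the
# pika client stand in for the JMS/MQ brokers described in the story.
import pika

def on_request(ch, method, props, body):
    # Return a canned XML reply regardless of the request payload.
    reply = b"<orderResponse><status>ACCEPTED</status></orderResponse>"
    ch.basic_publish(
        exchange="",
        routing_key=props.reply_to,  # reply-to queue named by the caller
        properties=pika.BasicProperties(correlation_id=props.correlation_id),
        body=reply,
    )
    ch.basic_ack(delivery_tag=method.delivery_tag)

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()
channel.queue_declare(queue="orders.request")
channel.basic_consume(queue="orders.request", on_message_callback=on_request)
channel.start_consuming()
```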
At this point, they were able to easily apply service virtualization to the different challenges present in the use cases described by management. It took a few days to make sure that the virtual services satisfied all the different use cases, but the team was able to clear every blocker in those environments. This was one of the key turning points for the service virtualization movement in Sally's organization, because they were able to leverage Sally's team's expertise with basic service virtualization to tackle more complicated challenges that had real costs associated with them.
The management team then took the next valuable step for their organization, creating a dedicated service virtualization center of excellence within the organization that could be leveraged to build virtual services whenever new challenges would arise. Sally, of course, was the natural fit to lead the team.
Sally started building processes around onboarding virtualization initiatives and creating acceptance criteria, so the team itself didn't become a new bottleneck. Governance became an important part of the conversation, and the team defined five distinct roles and responsibilities to ensure each virtualization project was successful.
Setting up these roles was critical to the success of the service virtualization team, clarifying what was needed to make each virtualization project successful. Each member of the team had the Parasoft Virtualize desktop software; they would create the virtual services on their desktops and then make them available to users.
As the team became more popular, it became clear that they needed to scale their deployment. If one of the team members had to shut down their machine or go on vacation, it would affect users hitting the virtual services. So Sally decided it was time to upgrade their deployment architecture once again, and they procured a virtualization staging server.
This allowed the members of the team to join forces and share their virtual assets. The server was "always on" and acted as a virtual artifact library. Because the server was connected to source control, different versions of the services were automatically checked in as they were deployed. This gave the team a single source of truth for all virtual assets, and no one had to guess where the most recent version was.
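The auto check-in can be as simple as a hook on the staging server that commits each newly deployed asset. A minimal sketch, assuming the server's asset directory is a Git working copy and shelling out to the git CLI (the directory layout and file name are hypothetical):

```python
# Sketch: version a newly staged virtual asset by committing it to the
# Git working copy behind the staging server's asset directory.
# The directory layout and asset name are hypothetical.
import subprocess
from pathlib import Path

ASSET_REPO = Path("/opt/virtualize/assets")  # hypothetical location

def check_in(asset_file: str, version_note: str) -> None:
    subprocess.run(["git", "add", asset_file], cwd=ASSET_REPO, check=True)
    subprocess.run(
        ["git", "commit", "-m", f"Deploy {asset_file}: {version_note}"],
        cwd=ASSET_REPO, check=True,
    )

# e.g. check_in("payments-service.pva", "added timeout scenario")
```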
The team happily hummed along for several months, solving big and meaningful challenges for the organization, and grew by a few more members. In order to boost the team's visibility and awareness (and also increase the size of its budget), Sally implemented the "Hoo-Rah" program: every time the team built something with quantifiable ROI, they tracked the gains and sent out a public email explaining what they had done and which teams had benefited.
These "hoo-rah" emails were vital for bringing additional teams into the fold, and they also helped the key stakeholders of the business understand the importance of service virtualization to the test automation process.
Then, one evening in late summer, a member of the security team was auditing a critical application and discovered a potential attack vector that could be exploited to leak sensitive customer data and force the organization out of compliance. If it wasn't remediated quickly, the organization would have to notify the compliance committee and start the penalty process.
The team realized that if they could remediate the defect within a specific time window, they could push the changes to the production environment and all would be well. The challenge was that in order to reproduce the issue, they had to put many of their third-party payment systems into states where they would return various error conditions and intentionally leak PII or customer data.
The team didn't have the ability to force these systems, which were outside of their control, into the states needed to expose the defect and validate the fixes they would put in place. Sally was called in the middle of the evening and asked to get to work.
The team made quick work of reusing the existing virtual services they had created for these third-party payment systems and putting them into a state where they would return the negative behavior. Because the application didn't have to be redeployed, they could simply modify that behavior as the developers made their changes, and exercise all of the different combinations that led to the potential exploit. Needless to say, the team was successful in delivering a hot patch into production that saved the company millions.
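The capability that made this possible is worth pausing on: flipping a deployed stand-in into a failure mode at runtime, without redeploying anything. Here's a bare-bones sketch of the idea in plain Python (the modes, paths, and payloads are invented; this is not Parasoft's interface):

```python
# Sketch: a payment stub whose behavior can be flipped at runtime,
# e.g. POST /admin/mode/declined, without redeploying the app under
# test. Modes, paths, and payloads are invented for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer

MODE = {"current": "approved"}  # mutable, shared across requests

RESPONSES = {
    "approved": (200, b'{"status": "APPROVED"}'),
    "declined": (402, b'{"status": "DECLINED", "reason": "insufficient funds"}'),
    "leak":     (500, b'{"error": "trace", "card_number": "4111-0000-0000-0000"}'),
}

class PaymentStub(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path.startswith("/admin/mode/"):
            MODE["current"] = self.path.rsplit("/", 1)[-1]  # switch behavior
            self._reply(200, b'{"mode": "set"}')
        else:
            self._reply(*RESPONSES.get(MODE["current"], RESPONSES["approved"]))

    def _reply(self, status, body):
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 9081), PaymentStub).serve_forever()
```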
Sally's service virtualization center of excellence was now popular with developers and testers, and many of them started asking for access to Parasoft Virtualize themselves so they could build their own prototypes and validate negative and positive scenarios. Sally had the infrastructure to support it, but she didn't need to hand everyone the heavy hammer that was the professional desktop version, so she upgraded their infrastructure again and added Parasoft's thin-client interface to fully enable their DevOps workflows. This centralized dashboard gave any user in the organization access and enabled them to create virtual services and test cases right from their browser.
This evolution of the deployment created a “hybrid model,” in which individual team members could act in a federated way, creating their own virtual services for their needs, accessing them, modifying them, etc. And when it came time to integrate those virtual services into the larger architecture, they had a mechanism to collaborate with the virtualization center of excellence. The team could add additional servers to support the load, as well as snap in performance servers when the performance team got on board.
At this point, Sally had a comprehensive library of virtual assets, along with corresponding automated test cases, and a library of test data feeding into both of these test artifacts. The majority of the actual service creation was being done by the individual teams, and Sally's team was primarily responsible for orchestrating all of those different virtual services into an "environment." An environment was really just a template of virtual assets, test cases, and test data built into a specific configuration to satisfy a testing initiative. They built many of these environment templates and aligned them to the different applications in the organization.
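Conceptually, such a template is just a declarative binding of assets, tests, and data. Sketched as a plain Python structure (all names and fields below are invented for illustration, not Parasoft's schema):

```python
# Sketch: an "environment" as a declarative template binding virtual
# assets, test suites, and test data into one testable configuration.
# All names and fields are illustrative.
CHECKOUT_ENV = {
    "name": "checkout-regression",
    "virtual_assets": [
        {"asset": "payments-service.pva", "version": "3.2"},
        {"asset": "inventory-service.pva", "version": "1.7"},
    ],
    "test_suites": ["checkout-smoke", "checkout-negative-paths"],
    "test_data": "checkout-golden-records-v5",
}
```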
Whenever an application needed to be tested and the real environment wouldn't suffice, the virtualization center of excellence would spin up an environment of different virtual services and let the team members test against it. The teams became more and more reliant on virtual services as part of their test execution, and it was a natural transition into the continuous delivery pipeline.
The final, fully realized deployment of service virtualization at Sally's organization looked like this:
Individual team members would create the virtual services and test cases within their browser. If the virtual services needed to be updated, or additional logic needed to be added, the virtualization COE would handle it with their professional desktops. The virtual services and test cases would then be combined inside the thin-client interface, and when those environments needed to be available, the build system would deploy them either into the cloud or onto dedicated servers. The automated test cases would then kick off, the results would be sent to an aggregated dashboard, and the dynamic environment would be destroyed.
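Scripted against a generic provisioning API, that stage might look roughly like the sketch below. Every endpoint here is a hypothetical stand-in; the real integration would go through the build system's plugins or Parasoft's own APIs.

```python
# Sketch of the deploy -> test -> report -> destroy cycle driven from
# a build job. All URLs and endpoints are hypothetical stand-ins for
# the real provisioning and test-runner integrations.
import requests

PROVISION_API = "https://ctp.example.com/api/environments"  # hypothetical

def run_stage(template: str) -> None:
    # 1. Provision a dynamic environment from a template.
    env = requests.post(PROVISION_API, json={"template": template}).json()
    env_id = env["id"]
    try:
        # 2. Kick off the automated test cases against it.
        run = requests.post(f"{PROVISION_API}/{env_id}/test-runs").json()
        # 3. Publish results to the aggregated dashboard.
        requests.post("https://dashboard.example.com/api/results", json=run)
    finally:
        # 4. Destroy the dynamic environment either way.
        requests.delete(f"{PROVISION_API}/{env_id}")

if __name__ == "__main__":
    run_stage("checkout-regression")
```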
True continuous testing enabled by service virtualization isn't something that happens overnight. This story is real, and it's all possible with service virtualization, but it requires the organization to buy in and start from the ground up, just like Sally did. (By the way, she's now on the executive board.) This is the best way to bring service virtualization into your organization: step by step, applied where it's most valuable. Everybody's exact journey will be different, but the end result should be the same: lowering the total cost of testing and gaining the power to truly control your test automation process.
A Product Manager at Parasoft, Chris drives the product strategy for Parasoft's functional testing solutions. His expertise in accelerating the SDLC through automation has taken him to major enterprise deployments, such as Capital One and CareFirst.