As an architecture for building complex systems, microservices are gaining significant traction within the development community. While people are beginning to understand that the approach is not a panacea for all application architecture woes, applications with challenges related to dependencies and scaling can benefit greatly from it.
Microservices adoption is on the rise, but so are the struggles associated with understanding how to test microservices. Toby Clemson from ThoughtWorks has done a great job of enumerating the testing strategies you might want to employ in a microservices architecture (see his article for an overview of the different types of tests you may want to create), but mainstream knowledge about how to build and maintain those different kinds of tests is still in its infancy.
But in many ways, testing a microservices application is no different from testing an application built using any other architecture. Microservices use well-known technologies, such as REST or queues, for which the software industry already has well-established testing tools and best practices. The unique challenge with microservices is the sheer number of services that make up an application, along with the dependencies between them. In addition, each microservice still needs to function properly even when the other microservices it depends on are unavailable or responding improperly.
Microservices typically follow one of two patterns when interacting with each other: orchestration and reactive (choreography). Many microservices use a combined “hybrid” approach. In this post, I will provide strategies for addressing the challenges that arise when creating automated tests for microservices that use these different patterns, focusing on tests for the individual microservices (as opposed to end-to-end tests of the entire application).
A microservice using orchestration will make one or more explicit calls to external services or dependencies. The calls typically use a synchronous request-response flow and will often access REST-based services. If the services need to be called in a specific order, calls to a subsequent service are not made until a response is received for a call to a prior service. Because one service explicitly calls another, they are tightly coupled.
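To make the pattern concrete, here’s a minimal sketch of such an orchestrated call chain in Python. The hostnames, endpoints, and payload fields are illustrative assumptions, not any real API:

```python
import requests

def get_portfolio_value(account_id: str) -> float:
    """Orchestration: call Accounts first, then Quotes for each position.

    The Quotes calls are not made until the Accounts response arrives,
    which is what couples the Portfolio service to both dependencies.
    """
    # Step 1: fetch the positions held in the account from the Accounts service.
    positions = requests.get(
        f"http://accounts-service/accounts/{account_id}/positions",
        timeout=5,
    ).json()

    # Step 2: price each position using the Quotes service.
    total = 0.0
    for position in positions:
        quote = requests.get(
            f"http://quotes-service/quotes/{position['symbol']}",
            timeout=5,
        ).json()
        total += quote["price"] * position["quantity"]
    return total
```

Because the Quotes calls cannot start until the Accounts response arrives, a failure or slowdown in either dependency directly affects the Portfolio service.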
In the example shown above, creating and executing tests for the Portfolio microservice is challenging because the Portfolio microservice has dependencies on the Accounts and Quotes microservices, which need to be deployed in the test environment along with the Portfolio microservice. The Quotes service has a dependency on a third-party service to retrieve real-time stock prices, and the data returned by that service is always changing.
Relying on third-party services or services developed by different teams greatly adds to the complexity of the test environment. In addition, the Portfolio service’s handling of unexpected conditions needs to be tested, such as when the Accounts and/or Quotes services are unavailable, respond slowly, or respond with unexpected data. It is important to be able to make those services respond with different kinds of unexpected behavior to validate that the Portfolio microservice handles the error conditions properly.
You can use service virtualization to simulate the responses of the Accounts and Quotes microservices. Service virtualization enables you to define virtual versions of the Accounts and Quotes microservices and deploy them along with the actual instance of the Portfolio microservice. Virtualizing microservices is similar to virtualizing any other kind of service or application architecture. It might look like this:
Once this is done, the Portfolio microservice can be tested independently of its two dependencies.
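Parasoft Virtualize lets you define this behavior without writing code, but to illustrate the idea, here’s a hand-rolled sketch of a virtual Quotes service using Flask. The endpoint, payload shape, and admin switch are assumptions made for the example:

```python
import time
from flask import Flask, jsonify, request

app = Flask(__name__)

# Current behavior of the virtual service: "normal", "slow", or "error".
# A real service virtualization tool exposes this kind of switch through
# configuration rather than code.
mode = "normal"

@app.route("/admin/mode", methods=["POST"])
def set_mode():
    """Let a test reconfigure the virtual service's behavior on the fly."""
    global mode
    mode = request.json["mode"]
    return jsonify({"mode": mode})

@app.route("/quotes/<symbol>")
def get_quote(symbol):
    if mode == "slow":
        time.sleep(10)  # simulate a slow dependency
    if mode == "error":
        return jsonify({"error": "internal failure"}), 500
    # Canned, deterministic data instead of live, ever-changing prices.
    return jsonify({"symbol": symbol, "price": 123.45})

if __name__ == "__main__":
    app.run(port=9080)
```

Because the stub returns deterministic canned data, tests of the Portfolio service no longer depend on a third-party feed whose responses are always changing.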
The next challenge is to configure different environments for different cases, such as when the Accounts and Quotes services exhibit expected and unexpected behaviors. Let’s say that the team wants to test how the Portfolio service behaves when either the Accounts service or the Quotes service responds slowly or responds with error conditions. This may require running at least five different sets of tests, each with its own environment configuration accounting for slow response times, error responses, and the normal and abnormal behaviors of the dependent services.
For each test run, the environment needs to be put into the correct configuration before the tests for that configuration can run. In this example, we end up with at least five different test runs, each of which has its own environment configuration. The Environment Manager module within Parasoft’s Continuous Testing Platform can manage these different environment configurations for you:
This process isn’t specific to a microservices architecture – the same kinds of problems arise in service-oriented architectures in general, as well as in monolithic applications that may depend on only a handful of services. In a microservices architecture, however, the number of dependent services significantly increases. In a microservices environment with tens or hundreds of services, the ability to create, manage, and programmatically switch between different environment configurations for different test scenarios is very important and provides significant time and effort reduction.
As teams evolve their microservices, it is inevitable that API changes will be made to the services. A key problem that arises with API changes is understanding the effect of those changes on consumers of the services.
When a team modifies the API for a microservice they are building, any tests that validate that microservice need to be updated to reflect the changes in the API. Conversely, if virtual services are used to simulate dependent microservices and the API for one of those dependent microservices changes, the virtual services for that dependent microservice must be updated to reflect the changes as well.
Many teams use OpenAPI, RAML, or another service definition to describe the APIs for their microservices. When service definitions are used, the Change Advisor module within Parasoft SOAtest and Parasoft Virtualize can automatically detect which APIs have changed, and then automatically refactor existing functional tests or virtual services to update them with any new and/or removed fields in the API. Teams can create updated versions of their service definitions and use the Change Advisor to understand the impact of the changes on their tests and virtual services before making the changes. Once changes are made, Change Advisor makes it quick and painless to update existing assets to reflect the changes within the microservices.
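As a rough illustration of the kind of analysis Change Advisor automates, the sketch below diffs the schema fields of two versions of an OpenAPI 3 definition (the file names are placeholders):

```python
import json

def schema_fields(spec: dict) -> set:
    """Collect 'SchemaName.property' entries from an OpenAPI 3 definition."""
    fields = set()
    for name, schema in spec.get("components", {}).get("schemas", {}).items():
        for prop in schema.get("properties", {}):
            fields.add(f"{name}.{prop}")
    return fields

with open("accounts-api-v1.json") as old_file, open("accounts-api-v2.json") as new_file:
    old_fields = schema_fields(json.load(old_file))
    new_fields = schema_fields(json.load(new_file))

# Every added or removed field points at tests and virtual services
# that need to be refactored to match the new version of the API.
print("Added fields:  ", sorted(new_fields - old_fields))
print("Removed fields:", sorted(old_fields - new_fields))
```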
One of the primary goals of a microservices architecture is to create independent components. As a result, deploying, scaling, and updating the services will be easier. This goal is not completely realized, however, when using the orchestration pattern, because individual microservices have direct dependencies on other microservices. A way to solve this is to use the choreography pattern, also known as “reactive” or “event-driven” microservices. In this pattern, microservices do not directly reference each other. Instead, they push messages onto event streams to which other microservices have subscribed.
See the following example:
In this example, let’s say the Portfolio service has been instructed to add a stock position. Rather than calling the Accounts service directly, it publishes an event to the “Position Added” event stream. The Accounts microservice has subscribed to that event stream, so it gets the notification. It checks that the user has enough funds in their account; if so, it reduces the funds in the user’s account and publishes an event to the “Account Updated” event stream. If the user does not have enough funds, it may instead publish an error event to a different event stream (not shown, to keep the example simple). The Portfolio microservice is subscribed to the “Account Updated” event stream, and when it sees the event posted by the Accounts microservice, it updates its portfolio based on the confirmation from the Accounts service.
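To make the choreography concrete, here’s a toy, in-memory version of this flow. A real system would back the streams with a broker such as Kafka or RabbitMQ, and the funds check is stubbed out:

```python
from collections import defaultdict

# Toy in-memory event bus standing in for real event streams.
subscribers = defaultdict(list)

def subscribe(stream: str, handler) -> None:
    subscribers[stream].append(handler)

def publish(stream: str, event: dict) -> None:
    for handler in subscribers[stream]:
        handler(event)

def accounts_on_position_added(event: dict) -> None:
    # The funds check is stubbed out; a real Accounts service would
    # publish an error event instead when funds are insufficient.
    publish("account-updated", {"account": event["account"], "status": "ok"})

def portfolio_on_account_updated(event: dict) -> None:
    print(f"Portfolio updated for {event['account']}: {event['status']}")

# Neither service references the other; each only knows stream names.
subscribe("position-added", accounts_on_position_added)
subscribe("account-updated", portfolio_on_account_updated)

# The Portfolio service reacts to an "add position" request by publishing.
publish("position-added", {"account": "abc-123", "symbol": "PRFT", "qty": 10})
```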
The asynchronous communication in this type of architecture has the benefit that the services are highly decoupled from each other – instances of each service can be replaced, redeployed, or scaled without the other microservices caring about them. The tradeoff is that the asynchronous nature of the events makes it harder to understand how the system will execute and what the flow of events will be. Depending on the order or kind of events that are produced, the system could behave in unexpected ways. This is known as emergent behavior, and it is an inherent challenge in the development and testing of choreographed microservices.
There are different asynchronous messaging patterns that fall under the broader category of event-driven microservices. The asynchronous command calls pattern is used when microservices need to be orchestrated using asynchronous actions: one microservice calls another asynchronously, with a guarantee that the second microservice receives the message. In this pattern, messages are typically exchanged using queues.
A common framework used in microservice architectures to implement this pattern is RabbitMQ. A specific incarnation of this pattern occurs when one microservice needs to publish an event for a second microservice to process and then wait to read a “reply” event from that second microservice.
Consider the Portfolio example we just discussed, in which a REST API call tells the Portfolio microservice to add a position. The Portfolio service posts an event to the Position Added queue for the Accounts microservice to process, and then waits for the Accounts service to post a reply event to the Account Updated queue so that the REST API call can return data received from that event. There are two different ways to configure a test scenario for this example:

1. The test scenario itself plays the role of the Accounts service: it consumes the event from the Position Added queue and publishes the expected reply event to the Account Updated queue.
2. A virtual asset is deployed that listens on the Position Added queue and publishes the reply event to the Account Updated queue, simulating the Accounts service.
The first approach is simple and makes a self-contained test asset that has no additional external dependencies on test infrastructure. The second approach is reusable and is a closer simulation of the real behavior of the system. The second approach, however, has the cost of building, deploying, and managing a separate virtual asset.
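Here’s a sketch of the first approach as a plain-Python test using the pika RabbitMQ client. The queue names, payloads, and Portfolio endpoint are assumptions carried over from the example; Parasoft SOAtest can express the same scenario without this hand-written plumbing:

```python
import json
import threading
import pika
import requests

def impersonate_accounts_service():
    """Consume one event from 'position-added' and post the reply the
    Portfolio service is waiting for on 'account-updated'. This is the
    self-contained approach: the test plays the dependency's role."""
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="position-added")
    channel.queue_declare(queue="account-updated")

    method, _, body = next(channel.consume("position-added", inactivity_timeout=30))
    assert method is not None, "no event on position-added within 30s"
    event = json.loads(body)
    channel.basic_ack(method.delivery_tag)
    channel.basic_publish(
        exchange="",
        routing_key="account-updated",
        body=json.dumps({"account": event["account"], "status": "ok"}),
    )
    conn.close()

def test_add_position():
    # Play the Accounts service in the background while the REST call blocks.
    responder = threading.Thread(target=impersonate_accounts_service)
    responder.start()
    response = requests.post(
        "http://localhost:8080/portfolio/positions",  # illustrative endpoint
        json={"account": "abc-123", "symbol": "PRFT", "qty": 10},
        timeout=60,
    )
    responder.join()
    assert response.status_code == 200
    assert response.json()["status"] == "ok"
```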
A variation on the asynchronous command calls pattern is a microservice that listens on a queue for an incoming event, processes the event, and then publishes a follow-up event on a different queue for one or more other microservices to process:
In this example, the Invoice microservice is the service that needs to be tested. The Payments service publishes an event to the Payment Processed RabbitMQ queue for the Invoice service to pick up. The Invoice microservice reads the event from the queue, creates an invoice, and then publishes an event to the Invoice Created queue to direct the Email microservice to send an email to the customer with the invoice. To create a test scenario for the Invoice microservice, a test environment can be configured that contains two RabbitMQ queues and the deployed Invoice microservice. A Parasoft SOAtest test scenario can be constructed that publishes a payment-processed event to the Payment Processed queue. The scenario then subscribes to the Invoice Created queue to validate that the proper invoice created event gets published in response by the Invoice service. This is a very simple test scenario that nicely tests the Invoice service in isolation.
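A sketch of that scenario as a plain-Python test, again using pika. The queue names and payload fields are assumptions, and the Invoice service is presumed to be running against the same broker:

```python
import json
import pika

def test_invoice_created_for_processed_payment():
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="payment-processed")
    channel.queue_declare(queue="invoice-created")

    # Step 1: play the Payments service by publishing a payment event.
    channel.basic_publish(
        exchange="",
        routing_key="payment-processed",
        body=json.dumps({"payment_id": "p-42", "amount": 99.95}),
    )

    # Step 2: wait for the Invoice service to publish its follow-up event.
    method, _, body = next(channel.consume("invoice-created", inactivity_timeout=30))
    assert method is not None, "no invoice-created event within 30s"
    invoice = json.loads(body)
    assert invoice["payment_id"] == "p-42"
    assert invoice["amount"] == 99.95
    channel.basic_ack(method.delivery_tag)
    conn.close()
```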
The event firehose pattern is used when different sources produce a very high number of events that need to be delivered quickly to different consumers over a common hub. In this pattern, messages are exchanged via topics (in contrast to the asynchronous command calls pattern where the messages are exchanged via queues). A common framework that is used to implement the event firehose pattern is the Apache Kafka framework, and it looks a little something like this:
Let’s say that we want to test a single microservice that subscribes to a Kafka topic, processes the events it receives, and then publishes its results to a second Kafka topic. For example, something like this:
In this example, we have a Forecast microservice that subscribes to a Weather Data topic that collects weather data from many different sources. It then processes that data to update its forecast model and publishes forecast data to the Forecast Data topic. In this case, we need to validate that the Forecast service publishes the expected events to the Forecast Data topic for a specific set of Weather Data events.
This would be done by configuring a test environment that has the two Kafka topics and the deployed Forecast service. The test scenario would first publish the necessary events to the Weather Data topic and then subscribe to the Forecast Data topic to verify that the correct forecast data events were published by the Forecast service. Multiple different test scenarios would need to be constructed to verify the different types and order of events that could be expected to be handled by the Forecast microservice.
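Here’s what such a scenario might look like using the kafka-python client. The topic names, payload fields, and broker address are assumptions for the example:

```python
import json
from kafka import KafkaConsumer, KafkaProducer

def test_forecast_published_for_weather_events():
    consumer = KafkaConsumer(
        "forecast-data",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="latest",
        consumer_timeout_ms=30000,  # stop iterating if no event for 30s
    )
    consumer.poll(timeout_ms=1000)  # force partition assignment before publishing

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    # Step 1: publish a specific set of weather readings for the
    # Forecast service to process.
    for reading in [
        {"station": "KLAX", "temp_c": 21.0},
        {"station": "KJFK", "temp_c": 12.5},
    ]:
        producer.send("weather-data", reading)
    producer.flush()

    # Step 2: verify that the Forecast service publishes the expected
    # events. The "forecast" field is an assumed part of the payload.
    forecasts = [message.value for message in consumer]
    assert forecasts, "no forecast-data events were published"
    assert all("forecast" in event for event in forecasts)
```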
This is a relatively simple test scenario. The fact that the Forecast microservice is decoupled from the other microservices has the fortunate side effect that the test for the Forecast service is also decoupled from those microservices. In this case, you don’t need to set up a complex environment with virtual services – you can simply create test scenarios that publish events and verify that the correct events are created in response.
Many microservice teams have adopted a Continuous Integration/Continuous Deployment (CI/CD) process for building, testing, and deploying containerized microservices to automate the process and decrease the risks associated with deploying updates.
In this process, a container image that contains the microservice is automatically created and deployed into a test environment (often managed by Kubernetes or a Kubernetes-based distribution like OpenShift), where the microservice can be validated before it is promoted into end-to-end tests and finally into production. I would recommend reading CI/CD for Containerized Microservices and Designing microservices: Continuous integration. Both articles nicely describe this kind of process.
Some of the testing patterns that we have discussed rely on the use of virtual services for dependent microservices. These virtual services need to be highly componentized and easily deployable for the same reasons that the microservices they simulate are componentized. To make service virtualization work in these environments, you need to create containerized virtual services that can be easily deployed.
To create a containerized virtual service, you can take a base image that contains Parasoft Virtualize and all of its dependencies, and layer it with another image that contains all of the virtual asset configuration for the virtual service. The new image for the virtual service can then be deployed as a container into the Docker/Kubernetes environment, along with the container for the microservice under test and all of its (virtualized) dependencies.
As teams adopt microservices, it’s important to understand how to test them sufficiently. The messaging patterns and associated test patterns I’ve discussed here aren’t new, but the need for them has grown significantly as more and more applications adopt a microservices architecture.
To create and deploy test scenarios for your microservices with maximum flexibility, you can leverage Parasoft SOAtest, Parasoft Virtualize, and the Parasoft Continuous Testing Platform to ensure the highest quality and reliability of your microservices.
Nathan is Director of Development at Parasoft. He and his teams develop product capabilities in the areas of UI testing (Selenic), API testing (SOAtest), service virtualization (Virtualize), and unit testing (Jtest). He has been with Parasoft since 2000.