A couple weeks ago, we released a new capability in Parasoft SOAtest called the Smart API Test Generator. I was geeked. This technology is legitimately groundbreaking -- it uses artificial intelligence to convert manual UI tests into automated API tests, so you don't need expertise in API testing or even the ability to write any code at all to get started. It's all script-less, and it's activated through a simple plugin for Chrome, so you don't have to install a large tool set in order to use it.
But at the STAREAST testing conference back in May, where I gave a long talk about how awesome this technology is, people kept asking me how it was different from the record-and-replay technologies that already exist on the market.
The answer is artificial intelligence and machine learning... but why? AI for AI's sake is meaningless -- why do we need to add artificial intelligence to API testing? Well, we need it because record and replay testing just isn't enough. I'll get into this more in a bit.
To really scale API testing adoption and tackle the problems that testing teams are having keeping pace with development, you need more! Instead of just collecting traffic, recording it, and playing it back, we wanted to be able to automatically help users identify and organize captured API activity into meaningful, reusable, and extensible tests. We needed to lower the API testing adoption bar and get more testers involved.
But first, let me explain why this is so important.
Historically, organizations have relied on UI testing as the primary testing practice because it is easy and intuitive to define and execute, and easy to automate, at least initially. There is a low barrier to entry and it can scale across a large team of testers.
But this exclusive reliance on manual and UI testing carries hidden costs. Anyone who has worked with Selenium knows that things get difficult when the UI changes and you need to update your scripts. In fact, we've found that up to 80% of testing time is spent either executing manual UI tests or fixing automated UI tests that have broken as a result of application changes. On top of all that, UI testing can't be executed until the full application is available -- and if a defect is discovered, there is a high cost of rework because the application needs to be torn apart, fixed, and reassembled before testing can continue. Often, this late-cycle defect detection leads to significant release delays and raises the total cost of testing.
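To make that fragility concrete, here's a minimal sketch of a UI-level login test in Selenium's Java bindings. The URL, element locators, and credentials are all hypothetical; the point is how tightly each line is welded to the current page structure:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class BrittleUiLoginTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example-store.test/login"); // hypothetical app

            // Each locator depends on the page's current structure. Rename an
            // id or restyle a button, and every line below needs repair.
            driver.findElement(By.id("username")).sendKeys("demo-user");
            driver.findElement(By.id("password")).sendKeys("demo-pass");
            driver.findElement(By.cssSelector("button.login-submit")).click();

            String banner = driver.findElement(By.className("welcome-banner")).getText();
            if (!banner.contains("Welcome")) {
                throw new AssertionError("Login flow failed: " + banner);
            }
        } finally {
            driver.quit();
        }
    }
}
```

One CSS tweak to the login form and this test fails, even though the application itself still works.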
To complement and reduce the reliance on UI testing, organizations can leverage API testing, which solves many of these issues by providing maintainable, end-to-end scenarios that can be reused for more than just functional testing. API tests create a good communication channel between developers and testers since they help document the API's behavior in concrete, realistic terms. Shifting the diagnosis and fix of bugs and security vulnerabilities found by API testing to earlier in the lifecycle has a big pay-off in reaching schedule and quality goals.
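By contrast, here's roughly what the same login check looks like as an API test, written here with nothing but the JDK's built-in HttpClient (the endpoint and payload are, again, made up for illustration):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LoginApiTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // The same login behavior as the UI test, expressed directly
        // against a hypothetical API endpoint.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example-store.test/api/v1/login"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"username\":\"demo-user\",\"password\":\"demo-pass\"}"))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The assertions target the API contract, not the page layout, so
        // the test survives UI redesigns and can run before the UI exists.
        if (response.statusCode() != 200 || !response.body().contains("\"token\"")) {
            throw new AssertionError("Unexpected response: " + response.statusCode());
        }
    }
}
```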
Organizations, however, have struggled to adopt API testing methods because even awesome API testing tools just haven't historically provided enough help. To use API testing tools effectively, testers have needed intimate knowledge of the APIs they are trying to test, including how those APIs are used by the application in question, which requires specialized skills and expertise. And developers don't have the time to test them, so this extremely beneficial practice ends up avoided -- untenable for testers and undesirable for developers.
To solve this challenge, functional test automation companies came up, many years ago, with the idea of recording API activity and creating API tests from traffic. This was powerful: by simply recording the transactions between the application and the backend system, you could capture the activities of the APIs, including how the API calls restructured the data being passed.
With this technology, you could record the scenarios taking place in the backend systems. This helped non-technical users understand which APIs were called and get a basic sense of the data used by each call; however, simple traffic collection didn't help them skill up, or learn how to maintain or scale their tests. It couldn't teach them the technical skills required to build different tests with all the different message formats and protocols APIs use, and it didn't provide enough help on its own for a non-technical user to approach the practice. It's a long road from a traffic recording to a fully functioning API test scenario.
And that's where we started thinking about the next step in lowering the barriers to adopting API testing. Simply recording network traffic between the tester's UI and the target application isn't sufficient to help automate API testing to the point where its usefulness is realized. It's analogous to an MP3 audio recording: you can play it back to hear the song, but it doesn't contain any information about how the song was created or what instruments were used. The song can't be modified or extended.
Consider the following issues with simple record and replay testing:
UIs are in constant flux during development, and maintaining UI-based test automation is time consuming. UIs only expose a certain, possibly limited, representation of the underlying business logic of the application, and relying on record and replay is both limiting and susceptible to breakage from frequent changes.
Application testing at the system level from the UI is going to create lots of network traffic. It's difficult, even for the trained eye, to decipher which traffic is part of an actual test scenario happening at the UI level. Relying on human interpretation of network traffic is both time consuming and error prone. Moreover, it's typically not a skill testers have, so they have to rely on developers to help.
Creating test scenarios from basic traffic recordings is difficult, and if multiple tests are needed to create a scenario, the difficulty multiplies. Replaying a traffic recording in place of a scenario is often difficult because it relies on the exact preconditions of the original test. Moreover, it can be impossible to replay the same test repeatedly, which is essential for creating performance or security-related tests.
A traffic recording is simply the sum of all the network activity during a test session. There's no inherent understanding of the underlying message passing, nor of its relationship to the API services involved. Without this, it's impossible to extend these recordings for other purposes, or even make changes to adapt to new requirements. They're frozen in time and only useful for the period in which they were recorded. (The sketch below shows the kind of runtime correlation a frozen recording can't do.)
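To make that precondition problem concrete, here's a minimal hand-written scenario (hypothetical endpoints and field names) that does what a verbatim replay can't: it extracts a fresh value from one response and feeds it into the next request, so the whole sequence can be rerun any number of times:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CorrelatedOrderScenario {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Step 1: create an order. A raw recording would have a stale order
        // id baked into every later request; replaying it fails as soon as
        // that id no longer exists on the server.
        HttpResponse<String> created = client.send(
                HttpRequest.newBuilder()
                        .uri(URI.create("https://example-store.test/api/v1/orders"))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(
                                "{\"item\":\"widget\",\"qty\":2}"))
                        .build(),
                HttpResponse.BodyHandlers.ofString());

        // Step 2: pull the fresh id out of the response so the next call
        // uses live data rather than the frozen value from a recording.
        Matcher m = Pattern.compile("\"orderId\"\\s*:\\s*\"([^\"]+)\"")
                .matcher(created.body());
        if (!m.find()) {
            throw new AssertionError("No orderId in response: " + created.body());
        }
        String orderId = m.group(1);

        // Step 3: verify the order just created. Because the id is
        // correlated at runtime, this scenario can be looped, e.g. as the
        // basis of a performance test.
        HttpResponse<String> fetched = client.send(
                HttpRequest.newBuilder()
                        .uri(URI.create("https://example-store.test/api/v1/orders/" + orderId))
                        .GET()
                        .build(),
                HttpResponse.BodyHandlers.ofString());

        if (fetched.statusCode() != 200) {
            throw new AssertionError("Order lookup failed: " + fetched.statusCode());
        }
    }
}
```

Building this by hand for every scenario is exactly the kind of specialized, tedious work that keeps non-technical testers away from API testing.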
This is where artificial intelligence comes into play, so that the traffic recording can not only take place, but be extended into real, actionable value for its users. This is why we developed the Smart API Test Generator: to create a place for novice API testers to get started with API testing without writing a single line of code, so users could quickly build full, meaningful test scenarios and even extend those API tests into security and performance tests, leveraging the simple, intuitive interface of Parasoft SOAtest.
As you are testing your UI, the Smart API Test Generator monitors the underlying API calls that are made to your application, just like a traffic collector might, and then uses artificial intelligence to discover patterns and understand relationships between those API calls. It can then generate automated API test scenarios that perform the same actions as your UI tests, but are fully automated and easily extendable.
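To give a feel for what "discover patterns and understand relationships" means in practice, here's a deliberately naive toy heuristic -- this is an illustration of the idea, not the actual algorithm inside SOAtest: scan the captured exchanges for values that one response produces and a later request consumes, and treat the linked calls as steps of one scenario.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NaiveCorrelationFinder {
    // A captured request/response pair, as a traffic collector might store it.
    record Exchange(String requestUrl, String requestBody, String responseBody) {}

    public static void main(String[] args) {
        // Hypothetical traffic captured during a UI session.
        List<Exchange> recording = List.of(
                new Exchange("/api/v1/login",
                        "{\"username\":\"demo-user\"}",
                        "{\"token\":\"abc123\"}"),
                new Exchange("/api/v1/orders?token=abc123",
                        "{\"item\":\"widget\"}",
                        "{\"orderId\":\"42\"}"),
                new Exchange("/api/v1/orders/42", "",
                        "{\"status\":\"CONFIRMED\"}"));

        // If a value produced by one response shows up in a later request,
        // assume the two calls are linked steps of the same scenario.
        Pattern value = Pattern.compile("\"\\w+\"\\s*:\\s*\"(\\w+)\"");
        List<String> links = new ArrayList<>();
        for (int i = 0; i < recording.size(); i++) {
            Matcher m = value.matcher(recording.get(i).responseBody());
            while (m.find()) {
                String v = m.group(1);
                for (int j = i + 1; j < recording.size(); j++) {
                    Exchange later = recording.get(j);
                    if (later.requestUrl().contains(v) || later.requestBody().contains(v)) {
                        links.add("step " + i + " produces \"" + v
                                + "\" consumed by step " + j);
                    }
                }
            }
        }
        links.forEach(System.out::println);
    }
}
```

A real generator has to handle far messier traffic than this, but once those links are known, the chained, parameterized scenario from the earlier sketch can be produced automatically instead of by hand.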
But why does that matter? In short, the tool both automatically creates tests based on a meaningful interpretation of the captured API activity, and supports the easy extension and maintenance of those tests so their value is multiplied throughout the software lifecycle.
All of this is good in its own right. But what I get even more excited about is how the Smart API Test Generator helps users understand the relationships between UI actions and API calls, making it easier for testers to “skill up” and adopt a comprehensive API testing practice. Since API testing can be fully automated and easily scales, teams can lower the total cost of quality while avoiding delayed releases.
Let's break that down a little. Because the Smart API Test Generator takes on the heavy lifting, giving testers an easy, scriptless place to start building API tests, it lowers the technical entry point to API testing, bringing beginners into the API testing world and into the user-friendly Parasoft SOAtest ecosystem, where users benefit from powerful visual tools that are easy to adopt and use.
Oh, the implications! Traffic collection of API activity during system and UI testing is insufficient for automating API testing, but that's all the industry has had until now. The dependency on preconditions makes these recordings less reusable and almost impossible to extend for other purposes. Not to mention the difficulty of creating meaningful test scenarios from complex traffic, something most testers are not skilled at.
But that doesn't matter anymore! Now that we have the Parasoft SOAtest Smart API Test Generator, users can leverage artificial intelligence for the heavy lifting. Beginning API testers can use it to get started and learn how API testing works, and experienced API testers can leverage it to be wildly more efficient (that's one of the main ways we use it now, here at Parasoft). And at the end of the day, organizations can benefit from saving time and money by building meaningful, extensible, and reusable tests by leveraging a machine. It is 2018, right?
A Product Manager at Parasoft, Chris strategizes product development of Parasoft’s functional testing solutions. His expertise in SDLC acceleration through automation has taken him to major enterprise deployments, such as Capital One and CareFirst.