One of the most recent buzzwords in the software development industry is artificial intelligence. But what does it really mean? How does AI in the context of software test automation differ from its broader definition?
What do we mean when we talk about AI and its sister term, machine learning? I'd like to clear up how we use these terms, and how we're applying AI and machine learning to advance state-of-the-art API testing.
Artificial intelligence is one of the most overloaded buzzwords in the digital marketplace. "AI" conjures up images of all-powerful supercomputers hell-bent on human destruction, voice-controlled assistants like Alexa or Siri, computer chess opponents, or self-driving cars.
Wikipedia defines AI research as "the study of 'intelligent agents': any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals." But that's a little too abstract. I like to think of AI as the ability of a computer program or machine to think (reason for itself) and learn (collect data and modify future behavior in a beneficial way). It's in this definition that we start to see something more meaningful in the context of what AI means for software development tools and technology.
It's also important to realize that the scope of what is considered "AI" changes over time. At one time, for example, people considered optical character recognition (OCR) state of the art in AI. Similarly, the question-and-answer responses from Siri and Alexa were once considered state of the art, but are now largely taken for granted and not always thought of as AI at all. The same thing will happen with software testing tools: today's innovations in automation will become expected as new capabilities evolve. But for now, let's consider AI in software test automation in 2019.
The use of AI in software development is still in its infancy, and the level of autonomy is much lower than in more mature areas such as self-driving systems or voice-assisted control, though it is moving in the direction of autonomous testing. The application of AI in software testing tools is focused on making the software development lifecycle easier. Through the application of reasoning, problem solving, and, in some cases, machine learning, AI can help automate and reduce the number of mundane and tedious tasks in development and testing.
“Don’t test automation tools do this already?” you might ask.
And the answer is of course, “Yes! They do!” …but they have limitations.
Where AI shines in software development is when it is applied to remove those limitations, to enable software test automation tools to provide even more value to developers and testers. The value of AI comes from reducing the direct involvement of the developer or tester in the most mundane tasks. (Human intelligence is still very much needed in applying business logic, etc.)
For example, consider that most (if not all) test automation tools run tests for you and deliver results. Most don't know which tests to run, so they run all of them or some predetermined set. So what if an AI-enabled bot could review the current state of test status, recent code changes, code coverage, and other metrics, decide which tests to run, and then run them? Bringing in decision-making based on changing data is an example of applying AI (and Parasoft does this, by the way). The software is effectively able to replace the developer or tester in the decision-making process. The benefit of applying this to a CI/CD pipeline is clear.
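To make the idea concrete, here is a minimal sketch of change-based test selection. This is not Parasoft's implementation; the class and file names are hypothetical, and a real engine would weigh many more signals (test history, coverage trends, risk), but it shows the kind of decision such a bot automates:

```java
import java.util.*;

// Minimal sketch: given which source files each test covers and which files
// changed, select only the tests whose coverage intersects the change set.
public class TestSelector {

    public static Set<String> selectTests(Map<String, Set<String>> coverageByTest,
                                          Set<String> changedFiles) {
        Set<String> selected = new TreeSet<>();
        for (Map.Entry<String, Set<String>> entry : coverageByTest.entrySet()) {
            for (String file : entry.getValue()) {
                if (changedFiles.contains(file)) {
                    selected.add(entry.getKey());
                    break;
                }
            }
        }
        return selected;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> coverage = Map.of(
            "LoginServiceTest",  Set.of("LoginService.java", "UserDao.java"),
            "ReportBuilderTest", Set.of("ReportBuilder.java"),
            "UserDaoTest",       Set.of("UserDao.java"));
        Set<String> changed = Set.of("UserDao.java");

        // Only the tests that touch UserDao.java are re-run.
        System.out.println(selectTests(coverage, changed)); // [LoginServiceTest, UserDaoTest]
    }
}
```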
So, what about machine learning? Machine learning can augment AI by applying algorithms that allow the tool to improve automatically, learning from the copious amounts of data produced by testing.
Machine learning research is a subset of overall AI research, with a focus on decision-making management based on previously-observed data. This is an important aspect of AI overall, as intelligence requires modifying decision-making as learning improves. In software testing tools, though, machine learning isn’t always necessary — sometimes an AI-enabled tool is best manually fine-tuned to suit the organization using the tool, and then the same logic and reasoning can be applied every time, regardless of the outcome.
In other cases, data collection is key to the decision-making process, and machine learning can be extremely valuable, requiring some data initially and then improving or adapting as more data is collected. For example, code coverage, static analysis results, test results, or other software metrics, over time, can inform the AI about the state of the software project.
Machine learning has its own subset called deep learning, which is built on processing vast amounts of data. In most cases, that learning is carried out by multilayered neural networks: models inspired by the networks of neurons in the human brain that help computers acquire new knowledge and reason about it.
The key aspect of deep learning is the sheer volume of information needed to train those neural networks and drive the decision-making process. That much data isn't always available, or even applicable, in software testing, which may be why we don't yet see many uses of deep learning in this area. One possible example would be "learning" from tens of millions of lines of code to understand different types of security violations, and implementing a static analysis engine based on a deep learning model.
This is an important area of research and development at Parasoft. Excitingly, our current offerings are just the beginning, and our ongoing research in AI and ML continues to bring new ways to integrate these technologies into our products. Here are a few ways we have already brought them in.
This first example is in Parasoft Jtest, our software testing solution for Java developers that includes static analysis, unit testing, coverage and traceability, etc. Applying AI here, we’ve released automatic test case generation, which helps developers fill in the gaps when starting from a sparse JUnit harness.
Parasoft Jtest's IDE plug-in adds useful automation to the unit testing practice with easy one-click actions for creating, scaling, and maintaining unit tests. By using AI-enabled Jtest, users can achieve higher code coverage while cutting in half the time and effort required to build a comprehensive and meaningful suite of JUnit test cases.
One way it does this is by making it easier to create stubs and mocks for isolating the code under test. The underlying AI enables Jtest to observe the unit under test and determine its dependencies on other classes; when instances of those dependencies are created, it suggests mocking them so the user can build more isolated tests. Automatically creating the necessary mocks and stubs reduces the effort spent on one of the most time-consuming parts of test creation.
Parasoft Jtest also automatically detects code that isn't covered by existing test suites and traverses the control paths of the source code to figure out which parameters need to be passed into a method under test, and how stubs/mocks need to be initialized, to reach that code. With this AI, Jtest can automatically generate new unit tests, applying modified parameters to increase the overall code coverage of the entire project.
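To illustrate the kind of isolated test this aims at, here is a hand-written sketch using JUnit and Mockito. The OrderService and PaymentGateway types are hypothetical and this is not Jtest's generated output; it simply shows the shape of a test in which the unit's dependency is mocked rather than exercised for real:

```java
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

// Hypothetical code under test: OrderService depends on PaymentGateway.
public class OrderServiceTest {

    interface PaymentGateway {
        boolean charge(String account, double amount);
    }

    static class OrderService {
        private final PaymentGateway gateway;
        OrderService(PaymentGateway gateway) { this.gateway = gateway; }
        boolean placeOrder(String account, double amount) {
            return amount > 0 && gateway.charge(account, amount);
        }
    }

    @Test
    public void placeOrderChargesThroughGateway() {
        // The external dependency is mocked so the unit stays isolated.
        PaymentGateway gateway = mock(PaymentGateway.class);
        when(gateway.charge("acct-1", 25.0)).thenReturn(true);

        OrderService service = new OrderService(gateway);
        assertTrue(service.placeOrder("acct-1", 25.0));
    }
}
```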
Another good example, adding machine learning into the mix, is Parasoft SOAtest's Smart API Test Generator, which goes beyond record-and-playback testing, leveraging AI and machine learning to convert manual UI tests into complete, automated API test scenarios.
The Smart API Test Generator uses reasoning to understand the patterns and relationships in the different API calls made while exercising the UI. From that analysis, it constructs a series of API calls that represents the underlying interface calls made during the UI flow. It then applies machine learning by observing what it can about the different API resources and storing them as a template in a proprietary data structure. This internal structure is updated by examining other test cases in the user's library to learn different types of behavior when exercising the APIs, such as adding an assertion or a particular header at the right spot.
The goal of AI here is to create more advanced tests, not just repeat what the user was doing, as you get with simple record-and-playback testing. The tool recognizes patterns inside the traffic, creates a comprehensive data model of observed parameters, generates automated API tests, and applies learned patterns to other API tests to enhance them and help users create more advanced automated test scenarios. The resulting automated API tests are more complete, reusable, scalable, and resilient to change.
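As a rough illustration of the kind of multi-step scenario that results, here is a minimal, hand-written sketch in plain Java. The endpoints and payloads are hypothetical, and an actual SOAtest scenario lives in the tool rather than in code, but the structure is the same: chained API calls, data reused between steps, and a learned assertion at the end.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hand-written sketch of a generated-style API scenario; endpoints are hypothetical.
public class CheckoutApiScenario {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String base = "https://example.com/api";

        // Step 1: create a cart (mirrors the "add to cart" click in the UI).
        HttpResponse<String> cart = client.send(
            HttpRequest.newBuilder(URI.create(base + "/carts"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"item\":\"sku-42\"}"))
                .build(),
            HttpResponse.BodyHandlers.ofString());

        // Step 2: submit the order, reusing data extracted from the first response.
        HttpResponse<String> order = client.send(
            HttpRequest.newBuilder(URI.create(base + "/orders"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(cart.body()))
                .build(),
            HttpResponse.BodyHandlers.ofString());

        // An assertion of the kind the generator learns from existing tests in the library.
        if (order.statusCode() != 201) {
            throw new AssertionError("Expected 201 Created, got " + order.statusCode());
        }
    }
}
```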
Automatically validating the UI layer is another critical component of your testing strategy, ensuring that the product is fully verified before going into production. The Selenium framework has been widely adopted as the clear choice for UI testing, but users still struggle with its common challenges of test maintainability and stability. This is where AI technologies, and particularly machine learning, can help, providing self-healing at runtime to address the common maintainability problems associated with UI test execution.
We provide this functionality with Parasoft Selenic, which can "learn" about your application during regular execution of your Selenium tests. The Selenic engine monitors each run and captures detailed information about the web UI content of the application under test. It extracts DOM elements, their attributes, locators, and so on, and correlates them with the actions performed by UI-driven tests. Selenic stores that information in its AI engine using Parasoft's proprietary data modeling approach, and the model is updated continuously, analyzing the historical execution of all tests to keep getting "smarter."
This is a critical time-saver when UI elements of web pages are moved or modified significantly, causing tests to fail. With Selenic, the AI heuristics used by the engine can "match" those changed elements against the historical data represented by the model and automatically generate "smart locators" that are resistant to change, recovering the execution of Selenium tests at runtime. Information about these changes is automatically propagated through the model, and the future generation of new locators is adjusted accordingly.
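The core idea can be sketched in a few lines of Selenium code. This is not Selenic's engine (which works from a learned model rather than a hard-coded fallback list), but it shows what "healing" a broken locator at runtime means in practice; the locators themselves are hypothetical:

```java
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

// Minimal sketch of self-healing element lookup: try the recorded locator first,
// then fall back to alternates captured from earlier runs of the same test.
public class SelfHealingLookup {

    public static WebElement find(WebDriver driver, By primary, List<By> fallbacks) {
        try {
            return driver.findElement(primary);
        } catch (NoSuchElementException e) {
            for (By alternate : fallbacks) {
                List<WebElement> matches = driver.findElements(alternate);
                if (!matches.isEmpty()) {
                    // In a real engine, the "healed" locator would be reported back
                    // so the model and the test itself can be updated.
                    return matches.get(0);
                }
            }
            throw e;
        }
    }
}
```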
One of the roadblocks to successful adoption of static analysis tools is managing the large number of warnings and dealing with false positives (warnings that are not real bugs) in the results. Software teams that analyze a legacy or existing code base often struggle with the initial static analysis results and are put off enough by the experience that they don't pursue the effort further. Part of the reason for being overwhelmed is the sheer number of standards, rules (checkers), recommendations, and metrics that modern static analysis tools support.
Software development teams have unique quality requirements, and there are no one-size-fits-all recommendations for checkers or coding standards. Each team also has its own definition of false positive, which often means "don't care" rather than "this is technically incorrect." Parasoft's solution is to apply AI and machine learning to prioritize the findings reported by static analysis, improving the user experience and the adoption of such tools.
Parasoft's approach quickly classifies each finding in the output of a static analysis tool as either something the team wants to see or something the team wants to suppress. It does this by having the team review a small number of findings and then constructing a classifier from the metadata associated with those findings. The classifier is built on the results of previous manual classifications of static analysis findings, drawing on both historical suppressions of irrelevant warnings and prior prioritization of meaningful findings to fix in the codebase.
The end results are classified in two ways: findings the team is likely to want to fix, and findings the team is likely to suppress. This greatly improves the user experience by directing developers to the warnings that have the highest likelihood of applying to their project. With these innovations, organizations can immediately reduce the manual effort in their adoption and use of static analysis.
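For intuition, here is a toy version of that triage idea in Java. It is nothing like Parasoft's actual model (a real classifier would use much richer metadata than the rule ID alone), but it shows how previously reviewed findings can drive the ranking of new ones:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy sketch of metadata-based triage: learn, per static analysis rule, how often
// the team historically suppressed or fixed its findings, then rank new findings.
public class FindingTriage {

    record Finding(String ruleId, String file, boolean suppressed) {}

    private final Map<String, int[]> history = new HashMap<>(); // ruleId -> {fixed, suppressed}

    public void learn(List<Finding> reviewed) {
        for (Finding f : reviewed) {
            int[] counts = history.computeIfAbsent(f.ruleId(), k -> new int[2]);
            counts[f.suppressed() ? 1 : 0]++;
        }
    }

    /** Estimated likelihood (from history) that the team wants to see this rule's findings. */
    public double relevance(String ruleId) {
        int[] counts = history.getOrDefault(ruleId, new int[] {1, 1});
        return (double) counts[0] / (counts[0] + counts[1]);
    }

    public static void main(String[] args) {
        FindingTriage triage = new FindingTriage();
        triage.learn(List.of(
            new Finding("NULL_DEREF", "A.java", false),
            new Finding("NULL_DEREF", "B.java", false),
            new Finding("NAMING_STYLE", "A.java", true)));

        System.out.println(triage.relevance("NULL_DEREF"));   // high -> surface first
        System.out.println(triage.relevance("NAMING_STYLE")); // low  -> deprioritize
    }
}
```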
So what comes next? We are in active R&D in this space, continuing to explore further applications of artificial intelligence and machine learning to augment our software testing tool suite. There are many routes of research, but the end goal is clear: to help teams develop and test their code more efficiently and effectively, and to create higher quality software at speed.
Parasoft's VP of Development, Igor is responsible for technical strategy, architecture, and development of Parasoft products. Igor brings over 20 years of experience in leading engineering teams, with a specialization in establishing and promoting the best agile practices in software development environments.