Unit testing is a best practice for verifying the individual units and components of a software application, but it can be tedious and costly for Java developers. It’s painstaking to check each unit for correct behavior with manual assertions and to isolate each method with mocking, and the test code itself is open to bugs and misunderstood behavior. To improve this situation, you can use a runtime analysis tool to detect data and control flow and external dependencies, and to calculate test code coverage.
With the data collected from runtime analysis, an enterprise-grade solution like Parasoft Jtest can prompt the developer on how to improve the tests by automatically recommending assertions for correct behavior and methods to mock for better test isolation. This integration of automatic unit test generation and runtime analysis reduces the manual intervention required for unit testing in Java.
Unit testing is a well-known practice, but its implementation leaves room for improvement in many projects. Unit testing, done well, improves the agility of an agile process, increases quality and security, and brings long-term cost savings.
Unfortunately, despite these benefits and the desire to achieve better results, developers still struggle with unit testing. The time and effort needed for test creation and maintenance can be too great to justify increased testing. Test suites are often fragile because the unit or object under test is poorly isolated from its dependencies. Proper mocking of dependencies becomes the bane of software testers, as does creating the assertions needed to verify correct program logic. Even parameterizing tests for different scenarios can be tedious and time consuming.
Software development teams must address these problems with test creation, isolation, and maintenance if they want to achieve the benefits of thorough unit testing. The answer starts with test automation tools, but simply automating the execution of tests and collecting results isn’t enough. Runtime analysis, the process of observing a running executable and recording key metrics, is an innovative way to help improve unit testing creation, mocking, and test stability.
In most cases, developers don’t consider runtime analysis important in the early stages of unit testing. Most tools are used to catch errors that unit testing missed, or simply to calculate code coverage. While these benefits are important, runtime analysis can also observe the execution of the first iteration of a unit test to recommend improvements and to detect changes to the test runtime environment that interfere with test stability.
Test frameworks such as JUnit provide only a sparse skeleton that requires further developer input. This work is tedious, so it can be automated to fill in more of the details based on the observed program logic. The following Java unit test example can be generated automatically by Parasoft Jtest.
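The exact code depends on the class under test, but as a rough illustration, a generated JUnit test typically follows an arrange/act/assert shape like the sketch below (the BankAccount class and the values used here are invented for this example):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical class under test, invented for this sketch
class BankAccount {
    private double balance;
    BankAccount(double balance) { this.balance = balance; }
    double deposit(double amount) { balance += amount; return balance; }
}

public class BankAccountTest {

    @Test
    public void testDeposit() throws Throwable {
        // Arrange: construct the object under test with an observed starting value
        BankAccount underTest = new BankAccount(100.0);

        // Act: call the method under test
        double result = underTest.deposit(50.0);

        // Assert: verify the observed return value
        assertEquals(150.0, result, 0.0);
    }
}
```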
Similarly, unit tests with parameterized inputs can be generated, as shown below.
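A parameterized test built with the JUnit 4 Parameterized runner might look roughly like this, again using the invented BankAccount class from the sketch above, where each row of test data exercises the same method with different inputs and expected results:

```java
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// Uses the hypothetical BankAccount class from the previous sketch (same package assumed)
@RunWith(Parameterized.class)
public class BankAccountParameterizedTest {

    private final double initialBalance;
    private final double amount;
    private final double expected;

    public BankAccountParameterizedTest(double initialBalance, double amount, double expected) {
        this.initialBalance = initialBalance;
        this.amount = amount;
        this.expected = expected;
    }

    // Each row is one scenario: starting balance, deposit amount, expected result
    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { 100.0, 50.0, 150.0 },
            { 0.0, 25.0, 25.0 },
            { 10.0, 0.0, 10.0 }
        });
    }

    @Test
    public void testDeposit() {
        BankAccount underTest = new BankAccount(initialBalance);
        assertEquals(expected, underTest.deposit(amount), 0.0);
    }
}
```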
Since the created tests are executable from the start, runtime analysis can observe both their results and their execution flow. For example, a test may fail because of a raised exception, as shown below.
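In the hypothetical sketch below (the AccountService and AccountRepository classes are invented for illustration), the object under test dereferences a dependency that was never configured, so the test fails with a NullPointerException that runtime analysis can record along with the path that led to it:

```java
import org.junit.Test;

// Hypothetical classes, invented for this sketch
class AccountRepository {
    double balanceOf(String accountId) { return 0.0; }
}

class AccountService {
    private AccountRepository repository;   // never initialized, so it stays null

    double transfer(String from, String to, double amount) {
        // Dereferencing the null repository raises a NullPointerException at runtime
        return repository.balanceOf(from) - amount;
    }
}

public class AccountServiceTest {

    @Test
    public void testTransfer() throws Throwable {
        AccountService underTest = new AccountService();

        // The call below throws NullPointerException, so the test fails;
        // the real dependency is a natural candidate for mocking
        underTest.transfer("A-100", "B-200", 50.0);
    }
}
```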
In addition, runtime tools observe the execution path into dependencies and recommend potential mocks to increase the isolation of the test. Although visual inspection of an object under test will reveal its dependencies, automating the detection and mocking of these dependencies saves lots of tedious and error-prone work.
In the example below, Parasoft Jtest offers the developer a choice of what to mock based on the execution trace of the unit test:
In this case, adding a mockable method pattern adds the method to a list of mocks to be handled by a mocking framework such as PowerMock.
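As a rough sketch of what the resulting test code can look like once a static call is routed through PowerMock (the CurrencyConverter and InvoiceService classes here are invented for illustration):

```java
import static org.junit.Assert.assertEquals;
import static org.powermock.api.mockito.PowerMockito.mockStatic;
import static org.powermock.api.mockito.PowerMockito.when;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

// Hypothetical classes, invented for this sketch
class CurrencyConverter {
    // Imagine this calls an external rate service, which is why it must be mocked in tests
    static double toEuros(double dollars) {
        throw new IllegalStateException("external service not available in tests");
    }
}

class InvoiceService {
    double totalInEuros(double dollars) {
        return CurrencyConverter.toEuros(dollars);
    }
}

@RunWith(PowerMockRunner.class)
@PrepareForTest(CurrencyConverter.class)   // the class whose static method is mocked
public class InvoiceServiceTest {

    @Test
    public void testTotalInEuros() throws Throwable {
        // Route the static dependency through a mock so the test is isolated
        mockStatic(CurrencyConverter.class);
        when(CurrencyConverter.toEuros(100.0)).thenReturn(92.0);

        InvoiceService underTest = new InvoiceService();
        assertEquals(92.0, underTest.totalInEuros(100.0), 0.0);
    }
}
```

Because the static call is stubbed at the classloader level, the production code does not need to change for the test to be isolated.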
Mocking constructor calls is also possible, as shown below.
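A sketch of constructor mocking with PowerMock’s whenNew, again using invented classes, might look like this:

```java
import static org.junit.Assert.assertTrue;
import static org.powermock.api.mockito.PowerMockito.whenNew;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

// Hypothetical classes, invented for this sketch
class FileWriterWrapper {
    FileWriterWrapper(String path) { /* would open a real file */ }
    void write(String content) { /* would touch the file system */ }
}

class ReportGenerator {
    boolean generate(String name) {
        FileWriterWrapper writer = new FileWriterWrapper(name + ".txt");
        writer.write("report: " + name);
        return true;
    }
}

@RunWith(PowerMockRunner.class)
@PrepareForTest(ReportGenerator.class)   // the class that calls the constructor internally
public class ReportGeneratorTest {

    @Test
    public void testGenerate() throws Exception {
        // Intercept "new FileWriterWrapper(...)" inside the class under test
        FileWriterWrapper writerMock = Mockito.mock(FileWriterWrapper.class);
        whenNew(FileWriterWrapper.class).withAnyArguments().thenReturn(writerMock);

        ReportGenerator underTest = new ReportGenerator();
        assertTrue(underTest.generate("summary"));
    }
}
```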
With full knowledge of the execution flow, plus the parameters used in method calls, runtime analysis can provide useful recommendations to the developer for improving the test code. Although assertions are provided statically when a test is created, they may not be enabled or correct. At test execution, failed and missing assertions trigger warnings, which then lead to recommendations to remedy the problem.
For example, after creating a new test in which none of the recommended assertions have been uncommented, a warning to that effect is displayed.
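The situation that triggers this kind of warning looks roughly like the hypothetical test below, where the recommended assertion is still commented out and the test executes without verifying anything:

```java
import org.junit.Test;

// Uses the hypothetical BankAccount class from the earlier sketch
public class BankAccountAssertionTest {

    @Test
    public void testDeposit() throws Throwable {
        BankAccount underTest = new BankAccount(100.0);
        double result = underTest.deposit(50.0);

        // Recommended assertion left commented out: the test runs but verifies nothing,
        // which is the condition that triggers a warning and a recommendation
        // assertEquals(150.0, result, 0.0);
    }
}
```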
Likewise, if an assertion fails, a recommendation for corrective action is displayed.
Whatever happens, this constant feedback about corrective action for assertions closes the loop on test creation and completes unit testing. Additionally, as the unit under test changes, those changes can be dealt with in the same manner, continually reducing the manual test maintenance required.
Runtime analysis can also detect changes in the test environment during execution that compromise the ability to recreate an identical environment for subsequent tests. Tests that pass at one time and fail later are a great source of frustration and lost time and effort. Runtime analysis can flag many of these instabilities as they occur.
It’s critical that each test execution has an identical starting point to ensure reliable results. Preventing test instability with runtime detection removes guesswork from the test debugging phase.
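As a simple illustration of the problem (the Configuration class here is invented for this sketch), two tests that share mutable static state can pass or fail depending on the order in which they run:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical class with shared static state, invented for this sketch
class Configuration {
    private static int timeout = 30;
    static int getTimeout() { return timeout; }
    static void setTimeout(int value) { timeout = value; }
}

public class ConfigurationTest {

    @Test
    public void testDefaultTimeout() {
        // Passes only if no other test has already changed the shared static value
        assertEquals(30, Configuration.getTimeout());
    }

    @Test
    public void testCustomTimeout() {
        // Mutates static state and never restores it, so later tests that depend on
        // the default starting point may fail depending on execution order
        Configuration.setTimeout(60);
        assertEquals(60, Configuration.getTimeout());
    }
}
```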
As you can see, runtime analysis isn’t just for computing code coverage. Runtime analysis during test execution is critical to improving test efficiency and effectiveness. Monitoring execution paths provides information about dependencies and improves how they are handled and mocked. Assertions can be monitored, and automatic recommendations help improve test fidelity. Detecting changes in the runtime test environment that affect test stability removes frustration and reduces debugging cycles for test code.
Kapil is a Product Manager at Parasoft, focusing on Parasoft Jtest. Kapil held several technical positions ranging from software engineer to development lead, before moving into product management.