Is there a good pattern to combine unit testing and performance testing?
I have made extensive use of the unit testing framework in the past, and have just upgraded to a version of MATLAB with the performance testing framework. I am looking for advice on how to combine the two. A typical scenario is having two different implementations of the same function, where I want to both:
1. Verify the two functions give the same result
2. Time the two functions
At the moment, for (1) I typically use class-based unit tests, with a single method calling both functions and verifying that the results match. It looks as if, to time the functions using the performance testing framework, I would have to put the calls to the two functions in different methods, but then I would lose the ability to verify that the functions produce the same output. Am I missing a neat way of doing both together? Or would it be better to do the unit testing and performance testing separately?
Andrew
Answers (2)
Steven Lord
on 21 Mar 2019
Generally, a performance test class that subclasses matlab.perftest.TestCase is also a unit test, since matlab.perftest.TestCase is itself a subclass of the unit test base class matlab.unittest.TestCase.
>> ?matlab.perftest.TestCase < ?matlab.unittest.TestCase
ans =
logical
1
[An explanation of that code, from the documentation page on metaclass objects: "Less than function (ClsA < ClsB). Use to determine if ClsA is a strict subclass of ClsB" (strict meaning that ClsX < ClsX is false).]
Because a performance test class is a unit test class, you can call all the usual MATLAB qualification API methods inside your performance test. As the "Write Performance Test" section of this documentation page shows, if you do this I would wrap the code whose performance you want to measure between startMeasuring() and stopMeasuring() calls, so that you don't measure the performance of the qualification API. You can also run the test as you're writing it using runtests to make sure that it works; then, once it's finished, run it with runperf (which will take longer because "The performance test framework runs the tests using a variable number of measurements") to collect the performance data.
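As a sketch of that workflow (the test class name myPerfTest is hypothetical):

results = runtests('myPerfTest')       % quick functional check while writing the test
perfResults = runperf('myPerfTest')    % full timing run; collects multiple samples
sampleSummary(perfResults)             % tabulate the collected Samples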
For the specific scenario you're describing, you could write your test as an Abstract base class (subclassing from matlab.perftest.TestCase) with an Abstract property. The concrete subclasses of that Abstract base class would fill that Abstract property with a function handle to the function that subclass was written to test. [Technically you wouldn't be comparing the two functions to each other directly, but you'd validate that each function returns the common expected results codified in the Abstract base class.] Run the collection of concrete tests using runperf to create an array of MeasurementResult objects and use whatever techniques you want to compare the data in those objects' Samples properties to determine which function is faster.
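A minimal sketch of that pattern (the class names, sortImplA, and the test data are hypothetical):

classdef (Abstract) ImplPerfTest < matlab.perftest.TestCase
    properties (Abstract)
        fcnUnderTest  % function handle; each concrete subclass supplies its own
    end
    methods (Test)
        function testMatchesExpected(testCase)
            input = [3 1 2];      % common input shared by every subclass
            expected = [1 2 3];   % expected result codified in the base class
            fcn = testCase.fcnUnderTest;
            testCase.startMeasuring();    % measure only the function under test
            actual = fcn(input);
            testCase.stopMeasuring();
            testCase.verifyEqual(actual, expected);
        end
    end
end

Each concrete subclass then just fills in the property, e.g. in SortAPerfTest.m:

classdef SortAPerfTest < ImplPerfTest
    properties
        fcnUnderTest = @sortImplA;   % the implementation this subclass tests
    end
end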
Alternately, if you have a lot of functions to compare (you're grading a collection of student assignments, for example) you could write your test as a parameterized test. Attached is a parameterized version of the example from the "Write Performance Test" documentation I linked above; run it with the following and review the Samples from each element in the results array.
results = runperf('fprintfTest')
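In rough outline, a parameterized comparison looks something like this (the class name and function handles are hypothetical placeholders, not the attached fprintf example):

classdef implCompareTest < matlab.perftest.TestCase
    properties (TestParameter)
        % One struct field per implementation to compare
        impl = struct('implA', @sortImplA, 'implB', @sortImplB);
    end
    methods (Test)
        function testImpl(testCase, impl)
            input = rand(1, 1e5);
            expected = sort(input);       % reference result
            testCase.startMeasuring();    % time only the implementation call
            actual = impl(input);
            testCase.stopMeasuring();
            testCase.verifyEqual(actual, expected);
        end
    end
end

Running it with runperf then produces one element in the results array per parameter value.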
2 Comments
Steven Lord
on 22 Mar 2019
Try to memoize the reference implementation. Since it's going to be called with the same input each time, the MemoizedFunction will be called repeatedly but it'll retrieve the answer from the cache every time after the first. That should be faster.
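In sketch form (referenceFcn and input are placeholders):

memoRef = memoize(@referenceFcn);   % returns a MemoizedFunction object
expResult = memoRef(input);         % computed once, then served from the cache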
Just be careful if random numbers or another piece of global state is involved, as stated in the second entry of the Tips section on that documentation page. You'll want to exert some control, via rng, over the random number generator if you're testing a function whose output could depend upon the particular numbers that were generated. Otherwise your test could fail one run, pass the next three, and fail again; sporadic failures are a real pain to investigate.
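For example, at the start of the test (the seed value is arbitrary):

rng(0, 'twister');   % fixed seed and generator so every run sees the same stream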
Andy Campbell
on 26 Dec 2019
Edited: Andy Campbell
on 26 Dec 2019
Hi Andrew,
Sorry for such a delay!
If you have R2018a or later, you can leverage labeled measurement boundaries to get both measurements in a single test procedure. That allows something like:
% Time the reference implementation under its own label
testCase.startMeasuring("reference");
expResult = referenceFcn();
testCase.stopMeasuring("reference");

% Time the optimized implementation under a separate label
testCase.startMeasuring("optimized");
actResult = optimizedFcn();
testCase.stopMeasuring("optimized");

% Verify outside the measured regions so qualification isn't timed
testCase.verifyEqual(actResult, expResult, 'RelTol', 1e-6);
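Running that test and comparing the two labels might then look like (the test class name is hypothetical):

results = runperf('CompareImplsTest');
sampleSummary(results)   % rows per measured label, e.g. "reference" and "optimized"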
Hope that helps!