2001
Journal Article
Title
A prediction system for evolutionary testability applied to dynamic execution time analysis
Abstract
Evolutionary testing (ET) is a test case generation technique based on the application of an evolutionary algorithm. It can be applied to the timing analysis of real-time systems, in which case timing analysis is equivalent to testing: the test objective is to uncover temporal errors, i.e. violations of the system's timing specification. Testability is the ability of a test technique to uncover faults; evolutionary testability, accordingly, is the ability of an evolutionary algorithm to successfully generate test cases that uncover faults, in this instance violations of the timing specification. The process attempts to find the best- and worst-case execution times of a real-time system. Certain attributes of real-time systems were found to greatly inhibit the successful generation of the best- and worst-case execution times through ET: small path domains, high data dependence, large input vectors, and nesting. This work defines software metrics that aim to express the effects of these attributes on evolutionary testing. ET is applied to generate the best- and worst-case execution paths of a set of test programs; their extreme timing paths are determined analytically, and the average success of ET in covering these paths is assessed. This empirical data is mapped against the software metrics to derive a prediction system for evolutionary testability. The measurement and prediction system developed from the experiments forecasts evolutionary testability with almost 90% accuracy. It will be used to assess whether the application of evolutionary testing to a real-time system is sufficient for successful dynamic timing analysis, or whether additional testing strategies are needed.
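The ET search described in the abstract can be sketched as a simple genetic algorithm whose fitness is the measured execution time of the system under test. Everything below is an illustrative assumption, not the article's implementation: `simulated_execution_time` is a toy, deterministic stand-in for instrumented timing measurement, and the population size, selection scheme, and mutation rate are arbitrary choices.

```python
import random

def simulated_execution_time(inputs):
    # Toy stand-in for an instrumented run of the system under test.
    # A branch guarded by a small path domain dominates the cost, the
    # kind of structure the article identifies as inhibiting ET.
    cost = sum(abs(x) for x in inputs) % 50
    if inputs[0] == 17 and inputs[1] > 90:   # rarely taken, expensive path
        cost += 1000
    return cost

def evolve_worst_case(dim=4, pop_size=40, generations=200, seed=1):
    """Search for an input vector that maximises execution time."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 100) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness (execution time) and keep the better half.
        scored = sorted(pop, key=simulated_execution_time, reverse=True)
        parents = scored[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, dim)            # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:                 # point mutation
                child[rng.randrange(dim)] = rng.randint(0, 100)
            children.append(child)
        pop = parents + children
    best = max(pop, key=simulated_execution_time)
    return best, simulated_execution_time(best)
```

For best-case analysis the same loop would minimise rather than maximise fitness. Note that the rarely taken branch gives the search no fitness gradient to follow, which is precisely why small path domains degrade evolutionary testability.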