The impact of design complexity on software quality - a meta-analysis
|Abran, Alain (Ed.); Büren, Günter (Ed.); Dumke, Reiner (Ed.); Cuadrado-Gallego, Juan J. (Ed.); Münch, Jürgen (Ed.):|
Applied Software Measurement. Joined International Conferences on Software Measurement, IWSM/MetriKon/Mensura 2010. Proceedings : 10.-12. November 2010, Vector Consulting Services, Stuttgart, Germany
Aachen: Shaker, 2010 (Magdeburger Schriften zum Empirischen Software Engineering)
|International Workshop on Software Measurement (IWSM) <20, 2010, Stuttgart>|
Software Metrik Kongress (MetriKon) <2010, Stuttgart>
International Conference on Software Process and Product Measurement (Mensura) <2010, Stuttgart>
|Fraunhofer IESE ()|
| complexity; measurement; meta-analysis|
The role of software quality is constantly increasing in industry. As a consequence, many techniques have been applied to assess, predict, and improve quality. For example, in early development phases, design complexity metrics are considered useful indicators of software reliability. Although many studies investigate the relationship between complexity metrics and software quality, it is unclear what we have learned from these studies, because no systematic summary exists to date. This paper reports on a meta-analysis of the impact of design complexity on software quality. We aggregated 35 Spearman correlation coefficients from 29 primary studies using a tailored meta-analysis approach. The main goal of the meta-analysis was to investigate the impact of design metrics (CBO, DIT, NOC, WMC, RFC, LCOM) on fault proneness, and to compare it with the impact of LOC. The main results are that coupling and scale (size) metrics are more strongly correlated with fault proneness than cohesion and inheritance metrics, and that LOC is more strongly correlated with fault proneness than all investigated design metrics. In addition, the meta-analysis revealed a strong inconsistency between the different studies that we were not able to explain satisfactorily. The best explanatory variable (defect collection phase) accounts for more than 50% of the observed variation in 5 of the 7 investigated metrics, but still leaves a significant amount of variation unexplained.
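The abstract describes aggregating per-study Spearman correlation coefficients into a pooled estimate. The paper uses its own tailored approach, which is not detailed here; as a minimal sketch, the snippet below shows the standard fixed-effect pooling via the Fisher z-transform, weighting each study by n - 3 (the inverse variance of z). The `(r, n)` pairs are hypothetical, not data from the paper.

```python
import math

def fisher_z(r):
    """Fisher z-transform: stabilizes the variance of a correlation r."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    """Back-transform a pooled z value to the correlation scale."""
    return math.tanh(z)

def pooled_correlation(studies):
    """Fixed-effect pooling of (r, n) pairs, weighted by n - 3,
    which is the inverse of Var(z) = 1 / (n - 3)."""
    num = sum((n - 3) * fisher_z(r) for r, n in studies)
    den = sum(n - 3 for _, n in studies)
    return inverse_fisher_z(num / den)

# Hypothetical correlations and sample sizes (illustration only):
studies = [(0.45, 120), (0.30, 80), (0.55, 200)]
print(round(pooled_correlation(studies), 3))
```

The heterogeneity the abstract reports would, in this framework, show up as the pooled estimate fitting the individual z values poorly (e.g., a large Q statistic), which is what motivates looking for moderators such as the defect collection phase.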