Fraunhofer-Gesellschaft

Publica


What do we know about perspective-based reading?

An approach for quantitative aggregation in software engineering
Ciolkowski, M.

IEEE Computer Society; Association for Computing Machinery (ACM):
3rd International Symposium on Empirical Software Engineering and Measurement, ESEM 2009. Proceedings: 15-16 Oct. 2009, Lake Buena Vista, Florida, USA
Los Alamitos: IEEE Computer Society, 2009
ISBN: 978-1-4244-4841-8
ISBN: 978-1-4244-4842-5
pp.133-144
International Symposium on Empirical Software Engineering and Measurement (ESEM) <3, 2009, Lake Buena Vista/Fla.>
English
Conference Paper
Fraunhofer IESE

Abstract
One of the main challenges in empirical software engineering today lies in the aggregation of evidence. Existing summaries often use qualitative narrative approaches or ad-hoc quantitative methods, such as box plots. With these, information important for decision makers, such as the existence and magnitude of a technology's effect, is hard to obtain objectively. Meta-analysis addresses this issue by providing objective quantitative information about a set of studies; however, its usefulness for software engineering studies suffers from high heterogeneity of the studies and missing information. In this paper, we describe an approach for the quantitative aggregation of controlled experiments that reduces these two problems. We demonstrate the approach by aggregating available experiments to investigate whether Perspective-Based Reading (PBR) improves team effectiveness compared to alternative reading approaches. We then compare the results of our aggregation to previous summaries addressing PBR's team effectiveness. Although the findings are similar, our approach is able to provide the required quantitative information objectively. Our aggregation showed that there is no clear positive effect of PBR: inspection teams using PBR on requirements documents are more effective when compared to ad-hoc approaches, but less effective when compared to checklists. In addition, we found strong indicators of researcher bias.
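The abstract refers to the quantitative pooling of effect sizes across controlled experiments. As a minimal illustration of how such pooling works in general, the sketch below implements a standard fixed-effect inverse-variance meta-analysis in Python. The effect sizes, variances, and function name are hypothetical and chosen for illustration only; they are not the paper's data or its specific aggregation method.

```python
# Illustrative sketch: fixed-effect inverse-variance meta-analysis.
# Each study contributes a standardized effect size (e.g., Hedges' g)
# and its sampling variance; studies with smaller variance get more weight.
import math

def pool_fixed_effect(effects, variances):
    """Pool per-study effect sizes with inverse-variance weights.

    effects   -- list of standardized effect sizes
    variances -- list of their sampling variances
    Returns (pooled effect, standard error of the pooled effect).
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # SE of the weighted mean
    return pooled, se

# Made-up effect sizes for three hypothetical experiments
effects = [0.30, -0.10, 0.05]
variances = [0.04, 0.09, 0.06]
pooled, se = pool_fixed_effect(effects, variances)
ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% confidence interval
print(f"pooled effect = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

A confidence interval that straddles zero, as in this toy run, corresponds to the kind of "no clear positive effect" conclusion the abstract draws for PBR versus checklists; the paper's actual approach additionally has to cope with study heterogeneity and missing information.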

URL: http://publica.fraunhofer.de/documents/N-125118.html