
Is there really a need for using NLP to elicit requirements? A benchmarking study to assess scalability of manual analysis

Authors: Groen, Eduard C.; Schowalter, Jacqueline; Kopczynska, Sylwia; Polst, Svenja; Alvani, Sadaf

Full text (PDF)

Schmid, Klaus (Ed.):
REFSQ-JP 2018. REFSQ Joint Proceedings of the Co-Located Events. Online resource : Joint Proceedings of REFSQ-2018 Workshops, Doctoral Symposium, Live Studies Track, and Poster Track; co-located with the 24th International Conference on Requirements Engineering: Foundation for Software Quality (REFSQ 2018), Utrecht, The Netherlands, March 19, 2018
Utrecht, 2018 (CEUR Workshop Proceedings Vol-2075)
http://ceur-ws.org/Vol-2075/
Paper 11, 10 pp.
International Conference on Requirements Engineering - Foundation for Software Quality (REFSQ) <24, 2018, Utrecht>
Workshop on Natural Language Processing for Requirements Engineering (NLP4RE) <1, 2018, Utrecht>
English
Conference paper, electronic publication
Fraunhofer IESE
requirements engineering; Opti4Apps

Abstract
The growing interest of the requirements engineering (RE) community in eliciting user requirements from the large amounts of online user feedback available for software-intensive products has led to the identification of such data as a sensible source of user requirements. Some researchers have proposed automated approaches for extracting requirements from user reviews. Although it is commonly assumed that manually analyzing large amounts of user reviews is challenging, no benchmarking has yet been performed that compares manual and automated approaches concerning their efficiency. We performed an expert-based manual analysis of 4,006 sentences from typical user feedback contents and formats and measured the amount of time required for each step. We then conducted an automated analysis of the same dataset to identify the degree to which automation makes the analysis more scalable. We found that a manual analysis indeed does not scale well, and that an automated analysis is many times faster and scales well to increasing numbers of user reviews.

URL: http://publica.fraunhofer.de/dokumente/N-497585.html