2018
Conference Paper
Title
Is there really a need for using NLP to elicit requirements? A benchmarking study to assess scalability of manual analysis
Abstract
The growing interest of the requirements engineering (RE) community in eliciting user requirements from the large amounts of online user feedback available for software-intensive products has led to the identification of such data as a sensible source of user requirements. Several researchers have proposed automated approaches for extracting requirements from user reviews. Although it is commonly assumed that manually analyzing large amounts of user reviews is challenging, no benchmarking has yet been performed that compares the manual and the automated approaches concerning their efficiency. We performed an expert-based manual analysis of 4,006 sentences from typical user feedback contents and formats and measured the amount of time required for each step. Then, we conducted an automated analysis of the same dataset to identify the degree to which automation makes the analysis more scalable. We found that a manual analysis indeed does not scale well and that an automated analysis is many times faster and scales well to increasing numbers of user reviews.