2023
Journal Article
Title
Evaluation of the EsteR Toolkit for COVID-19 Decision Support: Sensitivity Analysis and Usability Study
Abstract
Background: During the COVID-19 pandemic, local health authorities were responsible for managing and reporting current cases in Germany. Since March 2020, their employees have been tasked with containing the spread of COVID-19 by monitoring and contacting infected persons and tracing their contacts. In the EsteR project, we implemented existing and newly developed statistical models as decision support tools to assist the work of the local health authorities.
Objective: The main goal of this study was to validate the EsteR toolkit in two complementary ways: first, by investigating how stable the answers provided by our statistical tools are with respect to the model parameters in the back end and, second, by evaluating the usability and applicability of the web application front end with test users.
Methods: For the model stability assessment, a sensitivity analysis was carried out for all 5 developed statistical models. The default parameters of our models, as well as the test ranges of the model parameters, were based on a previous literature review of COVID-19 properties. The answers obtained under different parameters were compared using dissimilarity metrics and visualized using contour plots. In addition, the parameter ranges within which the models remained generally stable were identified. For the usability evaluation of the web application, cognitive walk-throughs and focus group interviews were conducted with 6 containment scouts working at 2 different local health authorities. They were first asked to complete small tasks with the tools and then to express their general impressions of the web application.
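For illustration, the sketch below shows in Python how such a sensitivity analysis can be set up. It uses a hypothetical single-person model (the distribution of the infection date before symptom onset, driven by a gamma-distributed incubation period), total variation distance as the dissimilarity metric, and a contour plot over the parameter grid. All parameter names, default values, and test ranges here are illustrative assumptions, not the actual EsteR settings.

# Minimal sketch of a parameter sensitivity analysis, assuming a
# hypothetical single-person model: the probability distribution of the
# infection date given a symptom onset, driven by a gamma-distributed
# incubation period. Defaults and ranges are illustrative only.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

DAYS = np.arange(0, 21)  # days before symptom onset

def infection_date_pmf(shape, scale):
    """Discretized incubation-period distribution over DAYS."""
    cdf = stats.gamma.cdf(np.append(DAYS, DAYS[-1] + 1), a=shape, scale=scale)
    pmf = np.diff(cdf)
    return pmf / pmf.sum()

def total_variation(p, q):
    """Dissimilarity metric: total variation distance between two PMFs."""
    return 0.5 * np.abs(p - q).sum()

# Reference answer under the default parameters (illustrative values).
default = infection_date_pmf(shape=5.0, scale=1.0)

# Evaluate the dissimilarity on a grid spanning the test ranges.
shapes = np.linspace(3.0, 8.0, 50)
scales = np.linspace(0.5, 2.0, 50)
dissim = np.array([[total_variation(infection_date_pmf(a, s), default)
                    for a in shapes] for s in scales])

# Contour plot; the region below a chosen tolerance (e.g., 0.1) would be
# rated as "stable" in the sense used above.
cs = plt.contourf(shapes, scales, dissim, levels=10, cmap="viridis")
plt.contour(shapes, scales, dissim, levels=[0.1], colors="red")
plt.colorbar(cs, label="total variation distance to default answer")
plt.xlabel("gamma shape")
plt.ylabel("gamma scale")
plt.title("Sensitivity of the model answer to incubation-period parameters")
plt.show()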
Results: The simulation results showed that some statistical models were more sensitive to changes in their parameters than others. For each of the single-person use cases, we determined a parameter region in which the respective model could be rated as stable. In contrast, the results of the group use cases depended strongly on the user inputs, and thus no parameter region of general model stability could be identified. A detailed simulation report of the sensitivity analysis is also provided. In the user evaluation, the cognitive walk-throughs and focus group interviews revealed that the user interface needed to be simplified and that more guiding information was necessary. In general, the testers rated the web application as helpful, especially for new employees.
Conclusions: This evaluation study allowed us to refine the EsteR toolkit. Using the sensitivity analysis, we identified suitable model parameters and analyzed how stable the statistical models were with respect to changes in their parameters. Furthermore, the user-friendliness of the web application's front end was improved based on the results of the cognitive walk-throughs and focus group interviews.