2023
Conference Paper
Title
A human-centric evaluation dataset for automated early wildfire detection from a causal perspective
Abstract
Insight into performance capabilities is crucial for successfully deploying AI solutions in real-world applications. Unanticipated input can lead to false positives (FP) and false negatives (FN), potentially resulting in false alarms in fire detection scenarios. The literature on fire detection models shows varying levels of complexity and explicability in evaluation practices, and little supplementary information on performance beyond accuracy scores is provided. We advocate for a standardized evaluation dataset that prioritizes the end-user perspective in assessing performance capabilities. This leads us to ask what an evaluation dataset must comprise to enable a non-expert to judge whether a model's performance capabilities are adequate for their specific use case. We propose using data augmentation techniques that simulate interventions to remove the connection to the original target label, providing interpretable counterfactual explanations of a model's predictions.
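The following is a minimal illustrative sketch, not taken from the paper, of how "augmentation as intervention" could look for a wildfire detector: masking out the smoke region removes the feature that justified the positive label, so a well-behaved model should flip its prediction on the counterfactual image. The names `detector` and `smoke_mask` are hypothetical placeholders.

```python
# Illustrative sketch only (assumptions, not the authors' implementation):
# simulate an intervention by removing the smoke pixels that caused the
# positive "fire" label, then check whether the model's decision flips.
import numpy as np


def intervene_remove_smoke(image: np.ndarray, smoke_mask: np.ndarray) -> np.ndarray:
    """Replace annotated smoke pixels with the mean background colour.

    image:      H x W x 3 array
    smoke_mask: H x W boolean array marking smoke pixels (hypothetical annotation)
    """
    counterfactual = image.copy()
    background_mean = image[~smoke_mask].mean(axis=0)  # average colour of non-smoke pixels
    counterfactual[smoke_mask] = background_mean       # "remove" the smoke
    return counterfactual


def counterfactual_check(detector, image, smoke_mask, threshold=0.5) -> bool:
    """Return True if the detector's positive decision flips under the intervention."""
    original_score = detector(image)
    counterfactual_score = detector(intervene_remove_smoke(image, smoke_mask))
    return original_score >= threshold and counterfactual_score < threshold
```

Under these assumptions, a model that still predicts "fire" after the smoke has been removed is relying on features unrelated to the target, which is exactly the kind of behaviour such a counterfactual evaluation is meant to expose to a non-expert end user.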
Author(s)