  • Publication
    Symptom diaries as a digital tool to detect SARS-CoV-2 infections and differentiate between prevalent variants
    (2022-11-14)
    Grüne, Barbara; Wolff, Anna; Buess, Michael; Kossow, Annelene; Küfer-Weiß, Annika; Neuhann, Florian
    The COVID-19 pandemic and the high numbers of infected individuals pose major challenges for public health departments. To overcome these challenges, the health department in Cologne developed a software called DiKoMa, which tracks contact and index persons and also provides a digital symptom diary. This work investigates whether these diaries can also be used for diagnostic purposes. Machine learning makes it possible to identify infections based on early symptom profiles and to distinguish between the dominant variants. Focusing on symptoms occurring in the first week, a decision tree is trained to differentiate between contact and index persons and between the dominant variants (Wildtype, Alpha, Delta, and Omicron). The model is evaluated using sex- and age-stratified cross-validation and validated on symptom profiles from the first 6 days. The variant classifiers achieve AUC-ROC values ranging from 0.89 for Omicron to 0.6 for Alpha. No significant differences are observed on the validation set (Alpha 0.63, Omicron 0.87). Evaluating symptom combinations with artificial intelligence can determine an individual's risk of having a COVID-19 infection, allows assignment to virus variants, and can contribute to the management of epidemics and pandemics at the national and international level. It can help reduce the number of specific tests in times of low labor capacity and could help identify new virus variants early.
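    The methodological core described in this abstract (a decision tree over binary first-week symptom indicators, evaluated with sex- and age-stratified cross-validation and AUC-ROC) can be illustrated with a short sketch. The code below is not the authors' DiKoMa implementation; it is a minimal approximation using scikit-learn on synthetic stand-in data, and the symptom names, age groups, and model settings (e.g. max_depth) are illustrative assumptions only.

      # Sketch: decision tree on first-week symptom indicators, AUC-ROC estimated
      # with cross-validation stratified jointly by label, sex, and age group.
      import numpy as np
      import pandas as pd
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.model_selection import StratifiedKFold
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n = 1000
      symptoms = ["cough", "fever", "anosmia", "fatigue", "sore_throat"]  # hypothetical set

      # Synthetic stand-in for diary data: one binary column per symptom reported
      # during the first week, plus sex, age group, and the label
      # (index person = 1, contact person = 0).
      df = pd.DataFrame(rng.integers(0, 2, size=(n, len(symptoms))), columns=symptoms)
      df["sex"] = rng.choice(["f", "m"], size=n)
      df["age_group"] = rng.choice(["0-17", "18-59", "60+"], size=n)
      df["index_case"] = rng.integers(0, 2, size=n)

      X = df[symptoms].to_numpy()
      y = df["index_case"].to_numpy()

      # Composite key so each fold preserves the joint distribution of label, sex,
      # and age group (an approximation of the stratification named in the abstract).
      strata = df["index_case"].astype(str) + "_" + df["sex"] + "_" + df["age_group"]

      aucs = []
      cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
      for train_idx, test_idx in cv.split(X, strata):
          clf = DecisionTreeClassifier(max_depth=4, random_state=0)
          clf.fit(X[train_idx], y[train_idx])
          scores = clf.predict_proba(X[test_idx])[:, 1]
          aucs.append(roc_auc_score(y[test_idx], scores))

      print(f"mean AUC-ROC over folds: {np.mean(aucs):.2f}")

    On real diary data the same loop could be repeated per variant period (Wildtype, Alpha, Delta, Omicron) to obtain the per-variant AUC-ROC values reported above; with purely random synthetic data the score stays near 0.5.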
  • Publication
    Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety
    (2021)
    Abrecht, Stephanie; Bär, Andreas; Brockherde, Felix; Feifel, Patrick; Fingscheidt, Tim; Ghobadi, Seyed Eghbal; Hammam, Ahmed; Haselhoff, Anselm; Hauser, Felix; Heinzemann, Christian; Hoffmann, Marco; Kapoor, Nikhil; Kappel, Falk; Klingner, Marvin; Kronenberger, Jan; Küppers, Fabian; Löhdefink, Jonas; Mlynarski, Michael; Mualla, Firas; Pavlitskaya, Svetlana; Pohl, Alexander; Ravi-Kumar, Varun; Rottmann, Matthias; Sämann, Timo; Schneider, Jan David; Schwalbe, Gesina; Sicking, Joachim; Srivastava, Toshika; Varghese, Serin; Weber, Michael; Wirkert, Sebastian; Woehrle, Matthias
    The use of deep neural networks (DNNs) in safety-critical applications like mobile health and autonomous driving is challenging due to numerous model-inherent shortcomings. These shortcomings are diverse and range from a lack of generalization and insufficient interpretability to problems with malicious inputs. Cyber-physical systems employing DNNs are therefore likely to suffer from safety concerns. In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged. This work provides a structured and broad overview of them. We first identify categories of insufficiencies and then describe research activities aimed at their detection, quantification, or mitigation. Our paper addresses both machine learning experts and safety engineers: the former might profit from the broad range of machine learning (ML) topics covered and the discussions on limitations of recent methods; the latter might gain insights into the specifics of modern ML methods. We moreover hope that our contribution fuels discussions on desiderata for ML systems and strategies on how to propel existing approaches accordingly.