Fraunhofer-Gesellschaft
2022
Journal Article
Title

Critical assessment of transformer-based AI models for German clinical notes

Abstract
Objective: Healthcare data such as clinical notes are primarily recorded in an unstructured manner. If adequately translated into structured data, they can be utilized for health economics and lay the groundwork for better individualized patient care. To structure clinical notes, deep-learning methods, particularly transformer-based models like Bidirectional Encoder Representations from Transformers (BERT), have recently received much attention. Currently, biomedical applications are primarily focused on the English language. While general-purpose German-language models such as GermanBERT and GottBERT have been published, adaptations for biomedical data are unavailable. This study evaluated the suitability of existing and novel transformer-based models for the German biomedical and clinical domain.
Materials and Methods: We used 8 transformer-based models and pre-trained 3 new models on a newly generated biomedical corpus, and systematically compared them with each other. We annotated a new dataset of clinical notes and used it together with 4 other corpora (BRONCO150, CLEF eHealth 2019 Task 1, GGPONC, and JSynCC) to perform named entity recognition (NER) and document classification tasks.
Results: General-purpose language models can be used effectively for biomedical and clinical natural language processing (NLP) tasks; still, our newly trained BioGottBERT model outperformed GottBERT on both clinical NER tasks. However, training new biomedical models from scratch proved ineffective.
Discussion: The domain-adaptation strategy's potential is currently limited by a lack of pre-training data. Since general-purpose language models are only marginally inferior to domain-specific models, both options are suitable for developing German-language biomedical applications.
Conclusion: General-purpose language models perform remarkably well on biomedical and clinical NLP tasks. If larger corpora become available in the future, domain-adapting these models may improve performance.
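The clinical NER tasks mentioned in the abstract are typically evaluated at the entity-span level rather than per token. As a small generic illustration (not code from the study; the tag labels below are invented for the example), the following sketch converts a BIO-tagged token sequence into labeled entity spans, the usual first step before computing entity-level precision and recall:

```python
def bio_to_spans(tags):
    """Convert a BIO tag sequence into (label, start, end) entity spans.

    `end` is exclusive, so tags[start:end] covers the entity's tokens.
    """
    spans = []
    start, label = None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            # A new entity begins; close any open one first.
            if start is not None:
                spans.append((label, start, i))
            start, label = i, tag[2:]
        elif tag.startswith("I-") and start is not None and tag[2:] == label:
            # Continuation of the current entity.
            continue
        else:
            # "O" tag or an inconsistent "I-" tag ends the open entity.
            if start is not None:
                spans.append((label, start, i))
            start, label = None, None
    if start is not None:
        spans.append((label, start, len(tags)))
    return spans


# Hypothetical example with invented labels MED (medication) and DIAG (diagnosis):
example = ["B-MED", "I-MED", "O", "B-DIAG"]
print(bio_to_spans(example))  # [('MED', 0, 2), ('DIAG', 3, 4)]
```

Comparing the predicted and gold span sets then yields the entity-level F1 scores commonly reported for benchmarks such as BRONCO150.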
Author(s)
Lentzen, Manuel (Fraunhofer-Institut für Algorithmen und Wissenschaftliches Rechnen SCAI)
Madan, Sumit (Fraunhofer-Institut für Algorithmen und Wissenschaftliches Rechnen SCAI)
Lage-Rupprecht, Vanessa (Fraunhofer-Institut für Algorithmen und Wissenschaftliches Rechnen SCAI)
Kühnel, Lisa
Fluck, Juliane
Jacobs, Marc (Fraunhofer-Institut für Algorithmen und Wissenschaftliches Rechnen SCAI)
Mittermaier, Mirja
Witzenrath, Martin
Brunecker, Peter
Hofmann-Apitius, Martin (Fraunhofer-Institut für Algorithmen und Wissenschaftliches Rechnen SCAI)
Weber, Joachim
Fröhlich, Holger (Fraunhofer-Institut für Algorithmen und Wissenschaftliches Rechnen SCAI)
Journal
JAMIA Open
Open Access
DOI
10.1093/jamiaopen/ooac087
Language
English
Keyword(s)
  • clinical concept extraction
  • natural language processing
  • transformer-based models