2024
Journal Article
Title

Rationalism in the face of GPT hypes: Benchmarking the output of large language models against human expert-curated biomedical knowledge graphs

Abstract
Biomedical knowledge graphs (KGs) hold valuable information regarding biomedical entities such as genes, diseases, biological processes, and drugs. KGs have been successfully employed in challenging biomedical areas such as the identification of pathophysiology mechanisms or drug repurposing. The creation of high-quality KGs typically requires labor-intensive multi-database integration or substantial human expert curation, both of which take time and add to the workload of data processing and annotation. The use of automated systems for KG construction and maintenance is therefore a prerequisite for the wide uptake and utilization of KGs. Technologies supporting the automated generation and updating of KGs typically rely on Natural Language Processing (NLP), which is optimized for extracting the triples implicitly described in relevant biomedical text sources. At the core of this challenge lies the question of how to improve the accuracy and coverage of the information extraction module by utilizing different models and tools. The emergence of pre-trained large language models (LLMs) such as ChatGPT, whose popularity has grown dramatically, has revolutionized the field of NLP, making them potential candidates for text-based graph creation as well. So far, no previous work has investigated the power of LLMs for the generation of cause-and-effect networks and KGs encoded in Biological Expression Language (BEL). In this paper, we present initial studies towards one-shot BEL relation extraction using two different versions of the Generative Pre-trained Transformer (GPT) models and evaluate their performance by comparing the extracted results against a highly accurate BEL KG manually curated by domain experts.
Author(s)
Babaiha, Negin
Fraunhofer-Institut für Algorithmen und Wissenschaftliches Rechnen SCAI
Guru Rao, Sathvik
Fraunhofer-Institut für Algorithmen und Wissenschaftliches Rechnen SCAI
Klein, Jürgen
Fraunhofer-Institut für Algorithmen und Wissenschaftliches Rechnen SCAI
Schultz, Bruce
Fraunhofer-Institut für Algorithmen und Wissenschaftliches Rechnen SCAI
Jacobs, Marc
Fraunhofer-Institut für Algorithmen und Wissenschaftliches Rechnen SCAI
Hofmann-Apitius, Martin
Fraunhofer-Institut für Algorithmen und Wissenschaftliches Rechnen SCAI
Journal
Artificial Intelligence in the Life Sciences
Open Access
DOI
10.1016/j.ailsci.2024.100095
Language
English