Fraunhofer-Gesellschaft
2025
Conference Paper
Title

HarmLLaMA: Harmful Language Detection with Large Language Models

Abstract
Online platforms are complex systems that shape the commercial, social, and political environment and host debates on important real-life topics, e.g., health, emigration, elections, and climate change. These environments offer users freedom of expression through anonymous posting. Alongside its obvious advantages, some users abuse this freedom to spread harmful content, e.g., misinformation, propaganda, harmful conspiracy theories, or abusive, aggressive, and offensive speech. Automated detection techniques can effectively reduce the negative influence of the antisocial behavior of these malicious actors. In this article, we propose HarmLLaMA, a LLaMA 2 model fine-tuned using LoRA. Experimental results on two real-world datasets show that HarmLLaMA outperforms current state-of-the-art models in terms of Accuracy, Precision, Recall, and F1-Score.
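The abstract states that HarmLLaMA fine-tunes LLaMA 2 with LoRA (Low-Rank Adaptation). As a minimal illustration of the LoRA idea — not the paper's actual configuration — the sketch below shows how a frozen weight matrix is adapted by a scaled low-rank update; all dimensions, rank, and scaling values are assumed toy values:

```python
# Minimal sketch of a LoRA update: the adapted layer computes
# (W + (alpha / r) * B @ A) x, where W stays frozen and only the
# low-rank factors A and B are trained. Shapes and hyperparameters
# here are illustrative, not HarmLLaMA's actual settings.
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in = 8, 8   # toy layer dimensions
r, alpha = 2, 4      # assumed LoRA rank and scaling factor

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-initialized

def lora_forward(x):
    # Frozen path plus scaled low-rank correction.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# Because B starts at zero, the LoRA path contributes nothing before
# training, so the adapted layer initially matches the pretrained one.
assert np.allclose(lora_forward(x), W @ x)
```

During fine-tuning only `A` and `B` (and any classification head) receive gradients, which is why LoRA makes adapting a model the size of LLaMA 2 tractable.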
Author(s)
Truică, Ciprian-Octavian
Apostol, Elena-Simona
Ilie, Alexandru-Gabriel
Paschke, Adrian  
Freie Universität Berlin  
Mainwork
IEEE 21st International Conference on Intelligent Computer Communication and Processing, ICCP 2025. Proceedings  
Conference
International Conference on Intelligent Computer Communication and Processing 2025  
DOI
10.1109/ICCP68926.2025.11427180
Language
English
Institute(s)
Fraunhofer-Institut für Offene Kommunikationssysteme FOKUS
Keyword(s)
  • harmful language detection
  • offensive language detection
  • hate speech detection
  • social media analysis
  • large language models
  • fine-tuning