Authors: Gawantka, Falko; Schulz, Andreas; Lässig, Jörg; Just, Franz
Date added to repository: 2023-05-03
Year of publication: 2022
Handle: https://publica.fraunhofer.de/handle/publica/441315
DOI: 10.1109/iccicc57084.2022.10101657
Scopus ID: 2-s2.0-85158843752
Title: SkillDB - An Evaluation on the stability of XAI algorithms for a HR decision support system and the legal context
Type: conference paper
Language: en

Abstract: Explainability of artificial intelligence is an essential factor, required especially in critical domains, e.g. due to legal requirements. This research project focuses on an HR process in which the quality and stability of several local XAI algorithms were tested. The results show that increasing the number of repetitions of the algorithms yields a more stable output with respect to the length of the confidence interval of the feature importances. Furthermore, by comparing the different algorithms, individual feature importances could be strengthened or weakened, and thus the respective relevances of the features could be determined.
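
A minimal sketch of the stability notion described in the abstract, assuming a stochastic local explainer: the toy noisy_explainer stub, the feature count, and the noise level below are hypothetical placeholders and not the paper's actual method or data. Repeating the explainer and measuring the length of a confidence interval of the averaged feature importances illustrates how the output stabilises as the number of repetitions grows.

    # Hypothetical illustration: repeated runs of a stochastic local explainer
    # and the length of the confidence interval of the mean feature importance
    # as a stability indicator.
    import numpy as np

    def noisy_explainer(rng, n_features=5, noise=0.1):
        """Toy stand-in for a stochastic local XAI method: fixed importances plus sampling noise."""
        true_importance = np.linspace(1.0, 0.2, n_features)
        return true_importance + rng.normal(scale=noise, size=n_features)

    def ci_length_of_mean(importances, z=1.96):
        """Length of the ~95% normal-approximation CI of the mean importance per feature."""
        importances = np.asarray(importances)  # shape: (repetitions, n_features)
        std_err = importances.std(axis=0, ddof=1) / np.sqrt(importances.shape[0])
        return 2 * z * std_err

    rng = np.random.default_rng(0)
    for repetitions in (10, 50, 200):
        runs = np.stack([noisy_explainer(rng) for _ in range(repetitions)])
        print(repetitions, np.round(ci_length_of_mean(runs), 4))
    # The CI length shrinks roughly with 1/sqrt(repetitions), i.e. the
    # aggregated feature importances become more stable with more repetitions.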