Prof. Dr. Wrobel, Stefan
Now showing 1 - 3 of 3
Publication: Guideline for Designing Trustworthy Artificial Intelligence (Fraunhofer IAIS, 2023-02)
Cremers, Armin B.; Houben, Sebastian; Sicking, Joachim; Loh, Silke; Stolberg, Evelyn; Tomala, Annette Daria
Artificial Intelligence (AI) has made impressive progress in recent years and represents a key technology that has a crucial impact on the economy and society. Prominent use cases include applications in medical diagnostics, predictive maintenance and, in the future, autonomous driving. However, it is clear that AI and business models based on it can only reach their full potential if AI applications are developed according to high quality standards and are effectively protected against new AI risks. For instance, AI bears the risk of unfair treatment of individuals when processing personal data, e.g., to support credit lending or staff recruitment decisions. Serious false predictions resulting from minor disturbances in the input data are another example, for instance when pedestrians are not detected by an autonomous vehicle due to image noise. The emergence of these new risks is closely linked to the fact that the process for developing AI applications, particularly those based on Machine Learning (ML), strongly differs from that of conventional software: the behavior of AI applications is essentially learned from large volumes of data and is not predetermined by fixed programmed rules.
Publication: Visual Analytics for Human-Centered Machine Learning (2022-01-25)
Andrienko, Natalia; Andrienko, Gennady; Adilova, Linara
We introduce a new research area in visual analytics (VA) aiming to bridge existing gaps between methods of interactive machine learning (ML) and eXplainable Artificial Intelligence (XAI), on one side, and human minds, on the other side. The gaps are, first, a conceptual mismatch between ML/XAI outputs and human mental models and ways of reasoning, and second, a mismatch between the information quantity and level of detail and human capabilities to perceive and understand. A grand challenge is to adapt ML and XAI to human goals, concepts, values, and ways of thinking. Complementing the current efforts in XAI towards solving this challenge, VA can contribute by exploiting the potential of visualization as an effective way of communicating information to humans and a strong trigger of human abstractive perception and thinking. We propose a cross-disciplinary research framework and formulate research directions for VA.
Publication: Efficient Decentralized Deep Learning by Dynamic Model Averaging (2019)
Sicking, Joachim; Hüger, Fabian; Schlicht, Peter
We propose an efficient protocol for decentralized training of deep neural networks from distributed data sources. The proposed protocol handles different phases of model training equally well and adapts quickly to concept drifts. This leads to a reduction of communication by an order of magnitude compared to periodically communicating state-of-the-art approaches. Moreover, we derive a communication bound that scales well with the hardness of the serialized learning problem. The reduction in communication comes at almost no cost, as the predictive performance remains virtually unchanged. Indeed, the proposed protocol retains the loss bounds of periodically averaging schemes. An extensive empirical evaluation validates a major improvement in the trade-off between model performance and communication, which could benefit numerous decentralized learning applications such as autonomous driving, or voice recognition and image classification on mobile phones.
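The abstract above describes a divergence-triggered (dynamic) model averaging protocol: workers train locally and only synchronize by averaging when their models have drifted sufficiently far from the last shared reference. The toy sketch below illustrates that idea in Python; it is not the paper's exact protocol, and the threshold DELTA, the local_update stand-in, and the simulated parameter vectors are all illustrative assumptions.

```python
# Minimal sketch of divergence-triggered ("dynamic") model averaging.
# Hypothetical simulation: models are plain parameter vectors and the
# local update is a noisy gradient step toward a toy target; the real
# protocol may differ in its divergence criterion and synchronization.
import numpy as np

rng = np.random.default_rng(0)

N_WORKERS = 4   # number of decentralized learners (assumed)
DIM = 10        # parameter dimension (toy)
DELTA = 0.5     # divergence threshold that triggers averaging (assumed)
STEPS = 100

# All workers start from a common reference model.
reference = np.zeros(DIM)
models = [reference.copy() for _ in range(N_WORKERS)]
communication_rounds = 0

def local_update(w):
    """Stand-in for one local SGD step on a worker's private data shard."""
    grad = w - rng.normal(1.0, 0.5, size=w.shape)  # toy gradient toward a noisy target
    return w - 0.1 * grad

for step in range(STEPS):
    # Each worker trains on its own data; no communication happens here.
    models = [local_update(w) for w in models]

    # Dynamic averaging: synchronize only when the average squared
    # divergence from the last reference model exceeds the threshold.
    divergence = np.mean([np.sum((w - reference) ** 2) for w in models])
    if divergence > DELTA:
        reference = np.mean(models, axis=0)          # average the local models
        models = [reference.copy() for _ in models]  # broadcast the average
        communication_rounds += 1

print(f"synchronized {communication_rounds} times in {STEPS} steps")
```

Because synchronization is triggered by model drift rather than by a fixed schedule, quiet phases of training incur little communication while phases with rapid change (e.g., concept drift) synchronize more often, which is the trade-off the abstract refers to.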