Oßwald, Florian; Bartolosch, Roman; Fiolka, Torsten; Hartmann, Engelbert; Krach, Bernhard; Feil, Jan; Lederer, Martin
Date available: 2025-06-04
Date issued: 2023
Handle: https://publica.fraunhofer.de/handle/publica/488233
Scopus ID: 2-s2.0-85178641838

Abstract: While artificial intelligence (AI) has become part of more and more areas of daily life - both private and business - this development has not yet progressed as far in the military sector. This is now changing with the development of new projects such as the Future Combat Air System (FCAS), a highly ambitious European defense project planned as a replacement for systems such as the Eurofighter from 2040 onwards. To facilitate and accelerate discussions on the ethical implications of the use of AI in the military domain, we developed the FCAS Ethical AI Demonstrator. We chose Target Detection, Recognition, and Identification as one highly probable use case and implemented a simulation to showcase the ethical implications of the collaboration between the operator and an AI-assisted system in that application. To help the operator understand and assess the classifications of the automatic target recognition system, explanations of the AI results are computed with an Explainable AI (XAI) method and provided in the user interface. With this hands-on demonstrator, we are pleased to contribute to the discussions on the ethical implications of the use of AI in military applications.

Language: en
Keywords: Ethical AI; Explainable AI; Future Combat Air System (FCAS); Targeting Cycle; FCAS Ethical AI Demonstrator
Document type: conference paper
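
Note: The abstract does not specify which XAI method the demonstrator uses to explain the automatic target recognition results. The sketch below is only an illustration of one common model-agnostic choice, occlusion-based saliency, which highlights image regions whose removal most reduces the classifier's confidence in the predicted class. The `classify` callable, image layout, and parameter values are hypothetical placeholders, not details taken from the paper.

```python
import numpy as np

def occlusion_saliency(image, classify, target_class, patch=8, stride=8, fill=0.0):
    """Compute an occlusion-sensitivity heatmap for one prediction.

    image        : (H, W, C) float array, the sensor image fed to the classifier
    classify     : callable mapping an image to a vector of class probabilities
                   (hypothetical stand-in for the demonstrator's ATR model)
    target_class : index of the class whose explanation is requested
    Returns an (H, W) array; larger values mark regions whose occlusion
    lowers the confidence in `target_class` the most.
    """
    h, w, _ = image.shape
    baseline = classify(image)[target_class]          # confidence on the original image
    heatmap = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)

    # Slide a square occluder over the image and record the confidence drop.
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch, :] = fill
            drop = baseline - classify(occluded)[target_class]
            heatmap[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1.0

    # Average overlapping contributions before display in a user interface.
    return heatmap / np.maximum(counts, 1.0)
```

Such a heatmap could be overlaid on the sensor image in an operator interface so that the basis of a classification is visible at a glance; whether the demonstrator uses this or another XAI technique (e.g., gradient-based attribution) is not stated in the record above.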