Now showing 1 - 10 of 1531
  • Publication
    Kursbestimmung: Gaia-X und die Zukunft der datenzentrierten Verwaltung
    (2024)
    Brucke, Matthias; Etminan, Ghazal; Hofmann, Daniel; Kraemer, Peter; Krins, Tanja; Leufkes, Ralf; Lindner, René; Löffler, Sven; Lutz, Brigitte; Meiners, Anna-Lena; Pfaffenbichler, Xaver; Pfahl, Bianca; Radecki, Alanus von; Schonowski, Joachim; Schöngut, Winnie; Ta, Duy Phuong; Tegtmeyer, Sascha; Traunmüller, Martin; Siegfried, Tina; Wienand, Karl
    For at least 20 years, "smart cities" and "smart regions" have been evolving as part of a digitalization trend built on data networking in cities and regions. Smart cities and smart regions denote settlement areas in which the regular use of (ecologically, economically, and socially) sustainable products, services, technologies, processes, and infrastructures is systematically enabled and supported through highly integrated networking by means of information and communication technologies. The next step, toward networked, data-centric administration of cities and regions, is therefore of enormous importance. On the one hand, the use of data-based systems will extend and improve public services and sovereign tasks. On the other hand, ever more public and private actors are becoming involved along (digital) value chains. This is underscored by the increasingly evident need to pursue more sustainable and resilient paths in urban and regional development and to use digital tools and solutions to that end. This can accelerate the necessary transformations of the energy, transport, and infrastructure sectors. This document explains and describes the concept of data-centric administration and the organizational and technical prerequisites it requires. Section 1 offers a general introduction to data-centric administration and addresses the administrative challenges as well as the legal framework. Section 2 introduces the Gaia-X initiative, first in general and then as a tool for linking data and facilitating the digital transformation of municipal administrations. Section 3 describes the technical aspects of the digital transformation in administration: from current data use to the available technological and procedural tools. The document closes with several use cases that concretely illustrate the benefits of digitalization and data orientation for administrations.
  • Publication
    Sensor-based characterization of construction and demolition waste at high occupancy densities using synthetic training data and deep learning
    Sensor-based monitoring of construction and demolition waste (CDW) streams plays an important role in recycling (RC). Extracted knowledge about the composition of a material stream helps identify RC paths and optimize processing plants, and forms the basis for sorting. To enable economical use, robust detection of individual objects must be ensured even at high material throughput. Conventional algorithms struggle with the resulting high occupancy densities and object overlap, making deep learning object detection methods more promising. In this study, different deep learning architectures for object detection (Faster Region-based Convolutional Neural Network (Faster R-CNN), You Only Look Once (YOLOv3), Single Shot MultiBox Detector (SSD)) are investigated with respect to their suitability for CDW characterization. A mixture of brick and sand-lime brick is considered as an exemplary waste stream. Particular attention is paid to detection performance with increasing occupancy density and particle overlap. A method for the generation of synthetic training images is presented, which avoids time-consuming manual labelling. By testing the models trained on synthetic data on real images, the success of the method is demonstrated. Requirements for the composition of synthetic training data, as well as potential improvements and simplifications of the different architecture approaches, are discussed based on the characteristics of the detection task. In addition, the required inference time of the presented models is investigated to ensure their suitability for use under real-time conditions.
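    The abstract does not spell out the image generation pipeline, but the general idea behind synthetic training data for object detection can be sketched briefly: cropped particle images are composited onto a background, and the bounding-box labels fall out of the pasting step for free, with no manual annotation. Everything below (file names, the two classes, size assumptions) is a hypothetical placeholder, not the study's actual implementation.
    ```python
    # Sketch: compose synthetic detection training images by pasting particle
    # crops onto a conveyor-belt background; labels come from the paste step.
    # Assumes the crops are smaller than the output scene.
    import random
    from PIL import Image

    def compose_scene(background_path, particle_paths, n_objects, out_size=(640, 640)):
        scene = Image.open(background_path).convert("RGB").resize(out_size)
        labels = []  # (class_id, x_min, y_min, x_max, y_max)
        for _ in range(n_objects):
            cls = random.randrange(len(particle_paths))
            particle = Image.open(particle_paths[cls]).convert("RGBA")
            particle = particle.rotate(random.uniform(0, 360), expand=True)
            x = random.randint(0, out_size[0] - particle.width)
            y = random.randint(0, out_size[1] - particle.height)
            scene.paste(particle, (x, y), particle)  # alpha mask permits overlap
            labels.append((cls, x, y, x + particle.width, y + particle.height))
        return scene, labels

    # Raising n_objects raises the occupancy density, which is the regime the
    # study probes for detection robustness.
    scene, labels = compose_scene("belt.png", ["brick.png", "sand_lime.png"], 30)
    ```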
  • Publication
    A Concept Study for Feature Extraction and Modeling for Grapevine Yield Prediction
    (2024)
    Huber, Florian; Hofmann, Benedikt; Engler, Hannes; Gauweiler, Pascal; Herzog, Katja; Kicherer, Anna; Töpfer, Reinhard; Steinhage, Volker
    Yield prediction in viticulture is an especially challenging task within the broader field of yield prediction. The characteristics that determine annual grapevine yields are plentiful, difficult to obtain, and must be captured multiple times throughout the year. The processes currently used in grapevine yield prediction are based mainly on manually captured data and rigid statistical measures derived from historical insights. Experts for data acquisition are scarce, and statistical models cannot meet the requirements of a changing environment, especially in times of climate change. This paper contributes a concept for overcoming those drawbacks by (1) proposing a deep learning driven approach for feature recognition and (2) explaining how Extreme Gradient Boosting (XGBoost) can be utilized for yield prediction based on those features, while being explainable and computationally inexpensive. The methods developed will be influential for the future of yield prediction in viticulture.
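    As a rough illustration of step (2), the sketch below fits a gradient-boosted regressor on hypothetical per-vine features of the kind a feature-recognition stage might output; the data, feature names, and hyperparameters are invented for this example, and the built-in feature importances stand in for the explainability the authors highlight.
    ```python
    # Sketch: XGBoost regression from extracted vine features to yield.
    import numpy as np
    from xgboost import XGBRegressor

    rng = np.random.default_rng(0)
    # Hypothetical features per vine: bunch count, berry count, canopy area, shoot count.
    X = rng.uniform(size=(200, 4))
    y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.1, 200)

    model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
    model.fit(X, y)

    print(model.predict(X[:3]))        # yield estimates for three vines
    print(model.feature_importances_)  # cheap, built-in explainability
    ```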
  • Publication
    ML4CPS 2024 - Machine Learning for Cyber-Physical Systems
    (Helmut-Schmidt-Universität, 2024)
    Niggemann, Oliver; Windmann, Alexander
    Cyber-Physical Systems (CPS) are characterized by their ability to adapt and learn from their environment. Applications include advanced condition monitoring, predictive maintenance, diagnosis tasks, and many other areas. All these applications have in common that Machine Learning (ML) and Artificial Intelligence (AI) are the key technologies. However, applying ML and AI to CPS poses challenges such as limited data, less understood algorithms, and the need for high algorithm reliability. These topics were a focal point at the 7th ML4CPS - Machine Learning for Cyber-Physical Systems Conference in Berlin, held from the 20th to the 21st, where industry and research experts discussed current advancements and new developments.
  • Publication
    Data sovereignty requirements for patient-oriented AI-driven clinical research in Germany
    (2024)
    Frank, Kevin; Dauth, Stephanie; Köhm, Michaela; Orak, Berna; Spiecker genannt Döhmann, Indra; Böhm, Peter
    The rapidly growing quantity of health data presents researchers with ample opportunity for innovation. At the same time, exploiting the value of Big Data poses various ethical challenges that must be addressed in order to fulfil the requirements of responsible research and innovation (Gerke et al. 2020; Howe III and Elenberg 2020). Data sovereignty and its principles of self-determination and informed consent are central goals in this endeavor. However, their consistent implementation has enormous consequences for the collection and processing of data in practice, especially given the complexity and growth of data in healthcare, which implies that artificial intelligence (AI) will increasingly be applied in the field due to its potential to unlock relevant, but previously hidden, information from the growing volume of data (Jiang et al. 2017). Consequently, there is a need for ethically sound guidelines to help determine how data sovereignty and informed consent can be implemented in clinical research. Using the method of a narrative literature review combined with a design thinking approach, this paper aims to contribute to the literature by answering the following research question: What are the practical requirements for the thorough implementation of data sovereignty and informed consent in healthcare? We show that privacy-preserving technologies, human-centered usability and interaction design, explainable and trustworthy AI, user acceptance and trust, patient involvement, and effective legislation are key requirements for data sovereignty and self-determination in clinical research. We outline the implications for the development of IT solutions in the German healthcare system.
  • Publication
    Bridging the Gap Between IDS and Industry 4.0 - Lessons Learned and Recommendations for the Future
    (2024)
    Alexopoulos, Kosmas; Bakopoulos, Emmanouil; Larrinaga Barrenechea, Felix; Castellvi, Silvia; Firouzi, Farshad; Luca, Gabriele de; Maló, Pedro; Marguglio, Angelo; Meléndez, Francisco; Meyer, Tom; Orio, Giovanni di; Ruíz, Jesús; Treichel, Tagline
    The Plattform Industrie 4.0 (PI4.0) and the International Data Spaces Association (IDSA) are two independent, parallel initiatives with clear focuses. While PI4.0 addresses communication and interaction between networked assets in a smart factory and/or supply chain across an asset or product lifecycle, IDSA is about a secure, sovereign system of data sharing in which all stakeholders can realize the full value of their data. Since data sharing between companies requires both interoperability and data sovereignty, the question emerges regarding the feasibility and rationality of integrating the expertise of PI4.0 and IDSA. The IDS-Industrial Community (IDS-I) is an extension of IDSA whose goal is to strengthen the cooperation between IDSA and PI4.0. Two fields of expertise could be combined: the Platform's know-how in the area of Industrie 4.0 (I4.0) and the IDSA's expertise in the areas of data sharing ecosystems and data sovereignty. In order to realize this vision, many aspects have to be taken into account, as there are discrepancies on multiple levels. Specifically, at the reference architecture level, we have the RAMI4.0 model on the PI4.0 side and the IDS Reference Architecture Model (IDS-RAM) on the IDSA side. The existing I4.0 and IDS specifications are incompatible, e.g. in terms of models (i.e., the AAS metamodel and the IDS information model) and APIs, and there is also the issue of interoperability between I4.0 and IDS solutions. This position paper aims to bridge the gap between IDS and PI4.0 not only by analyzing how their existing concepts, tools, etc. have been "connected" in different contexts, but also by making recommendations on how different technologies could be combined in a generic way, independent of the concrete implementation of IDS and/or I4.0-relevant technology components. This paper could be used by both the IDS and I4.0 communities to further improve their specifications, which are still under development. The lessons learned and feedback from the initial joint use of technology components from both areas could provide concrete guidance on necessary improvements that could further strengthen or extend the specifications. Furthermore, it could help to promote the IDS architecture and specifications in the industrial production and smart manufacturing community and extend typical PI4.0 use cases to include data sovereignty by incorporating IDS aspects.
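    To make the model-level discrepancy tangible, here is a deliberately toy sketch of the kind of mapping such recommendations concern: asset-centric AAS content has to be wrapped with the sharing metadata an IDS resource description carries. Both classes are simplified stand-ins invented for this example and do not reproduce the actual AAS metamodel or IDS information model.
    ```python
    # Toy mapping sketch only; not the real AAS or IDS models.
    from dataclasses import dataclass, field

    @dataclass
    class AasSubmodel:            # stand-in for an AAS submodel
        id_short: str
        semantic_id: str
        elements: dict = field(default_factory=dict)

    @dataclass
    class IdsResource:            # stand-in for an IDS resource description
        title: str
        standard_license: str
        representation: dict = field(default_factory=dict)

    def submodel_to_resource(sm: AasSubmodel, license_uri: str) -> IdsResource:
        # The crux of the interoperability question: asset-centric content
        # must gain the usage-control metadata that IDS data sharing requires.
        return IdsResource(
            title=sm.id_short,
            standard_license=license_uri,
            representation={"semanticId": sm.semantic_id, **sm.elements},
        )
    ```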
  • Publication
    Attribute-Based Person Retrieval in Multi-Camera Networks
    Attribute-based person retrieval is a crucial component in various real-world applications, including surveillance, retail, and smart cities. In contrast to image-based person identification or re-identification, individuals are searched for based on descriptions of their soft biometric attributes, such as gender, age, and clothing colors. For instance, attribute-based person retrieval enables law enforcement agencies to efficiently search enormous amounts of surveillance footage gathered from multi-camera networks to locate suspects or missing persons. This thesis presents a novel deep learning framework for attribute-based person retrieval. The primary objective is to research a holistic approach that is suitable for real-world applications. Therefore, all necessary processing steps are covered. Pedestrian attribute recognition serves as the base framework to address attribute-based person retrieval in this thesis. Various design characteristics of pedestrian attribute recognition approaches are systematically examined with regard to their suitability for attribute-based person retrieval. Following this analysis, novel techniques are proposed and discussed to further improve the performance. The PARNorm module is introduced to normalize the model’s output logits across both the batch and attribute dimensions to compensate for imbalanced attributes in the training data and improve person retrieval performance simultaneously. Strategies for video-based pedestrian attribute recognition are explored, given that videos are typically available instead of still images. Temporal pooling of the backbone features over time proves to be effective for the task. Additionally, this approach exhibits faster inference than alternative techniques. To enhance the reliability of attribute-based person retrieval rankings and address common challenges such as occlusions, an independent hardness predictor is proposed that predicts the difficulty of recognizing attributes in an image. This information is utilized to remarkably improve retrieval results by down-weighting soft biometrics with an increased chance of classification failure. Additionally, three further enhancements to the retrieval process are investigated, including model calibration based on existing literature, a novel attribute-wise error weighting mechanism to balance the attributes’ influence on retrieval results, and a new distance measure that relies on the output distributions of the attribute classifier. Meaningful generalization experiments on pedestrian attribute recognition and attribute-based person retrieval are enabled for the first time. For this purpose, the UPAR dataset is proposed, which contributes 3.3 million binary annotations to harmonize semantic attributes across four existing datasets and introduces two evaluation protocols. Moreover, a new evaluation metric is suggested that is tailored to the task of attribute-based person retrieval. This metric evaluates the overlap between query attributes and the attributes of the retrieved samples to obtain scores that are consistent with the human perception of a person retrieval ranking. Combining the proposed approaches yields substantial improvements in both pedestrian attribute recognition and attribute-based person retrieval. State-of-the-art performance is achieved concerning both tasks and existing methods from the literature are surpassed. The findings are consistent across both specialization and generalization settings and across the well-established research datasets.
Finally, the entire processing pipeline, from video feeds to the resulting retrieval rankings, is outlined. This encompasses a brief discussion on the topic of multi-target multi-camera tracking.
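    The exact PARNorm formulation is defined in the thesis itself; as one plausible reading of normalizing the output logits across both the batch and attribute dimensions, the following sketch standardizes a logit matrix along each axis in turn (shapes and the eps value are illustrative).
    ```python
    # Sketch: two-axis logit standardization in the spirit of the described
    # PARNorm module (one plausible reading, not the thesis's exact module).
    import torch

    def normalize_logits(logits: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
        # logits: (batch, num_attributes)
        # Across the batch (per attribute): counteracts imbalanced attributes.
        z = (logits - logits.mean(dim=0, keepdim=True)) / (logits.std(dim=0, keepdim=True) + eps)
        # Across the attributes (per sample): no single attribute dominates a ranking.
        return (z - z.mean(dim=1, keepdim=True)) / (z.std(dim=1, keepdim=True) + eps)

    scores = normalize_logits(torch.randn(32, 40))  # e.g. 32 images, 40 attributes
    ```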
  • Publication
    Regression via causally informed Neural Networks
    (2024)
    Youssef, Shahenda; Doehner, Frank
    Neural Networks have been successful in solving complex problems across various fields. However, they require significant amounts of data to learn effectively, and their decision-making process is often not transparent. To overcome these limitations, causal prior knowledge can be incorporated into neural network models. This knowledge improves the learning process and enhances the robustness and generalizability of the models. We propose a novel framework, RCINN, that involves calculating the inverse probability of treatment weights given a causal graph model alongside the training dataset. These weights are then concatenated as additional features in the neural network model. By then incorporating the estimated conditional average treatment effect as a regularization term in the model's loss function, the potential influence of confounding variables can be mitigated, minimizing bias and improving the neural network model. Experiments conducted on synthetic and benchmark datasets using the framework show promising results.
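    A minimal sketch of the described recipe, under stated assumptions: a logistic propensity model yields the inverse probability of treatment weights, the weights are appended as an input feature, and a treatment-effect term regularizes the loss. For brevity, a population-level average treatment effect stands in for the paper's conditional average treatment effect; the network size, propensity model, and weighting factor are illustrative choices, not the RCINN implementation.
    ```python
    # Sketch: IPTW features plus a causal regularization term on the loss.
    import torch
    import torch.nn as nn
    from sklearn.linear_model import LogisticRegression

    # Synthetic stand-in data: 5 covariates, binary treatment t, outcome y.
    X = torch.randn(256, 5)
    t = (torch.rand(256) > 0.5).float()
    y = X[:, 0] + 2.0 * t + 0.1 * torch.randn(256)

    # Propensity scores from a simple logistic model (the paper derives the
    # weighting from a causal graph; this is a stand-in).
    e = torch.tensor(
        LogisticRegression().fit(X.numpy(), t.numpy()).predict_proba(X.numpy())[:, 1],
        dtype=torch.float32,
    )
    w = t / e + (1 - t) / (1 - e)  # inverse probability of treatment weights

    X_aug = torch.cat([X, t.unsqueeze(1), w.unsqueeze(1)], dim=1)  # weights as features

    # Crude IPTW effect estimate, the target of the regularization term below.
    ate_hat = (w * t * y).sum() / (w * t).sum() - (w * (1 - t) * y).sum() / (w * (1 - t)).sum()

    net = nn.Sequential(nn.Linear(7, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(200):
        pred = net(X_aug).squeeze(1)
        effect_gap = pred[t == 1].mean() - pred[t == 0].mean() - ate_hat
        loss = ((pred - y) ** 2).mean() + 0.1 * effect_gap ** 2  # MSE + causal term
        opt.zero_grad(); loss.backward(); opt.step()
    ```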
  • Publication
    Depth estimation and 3D reconstruction from UAV-borne imagery: Evaluation on the UseGeo dataset
    (2024)
    Weinmann, Martin; Nex, Francesco; Stathopoulou, E.K.; Remondino, F.; Jutzi, Boris
    Depth estimation and 3D model reconstruction from aerial imagery are important tasks in photogrammetry, remote sensing, and computer vision. To compare the performance of different image-based approaches, this study presents a benchmark for UAV-based aerial imagery using the UseGeo dataset. The contributions include the release of various evaluation routines on GitHub, as well as a comprehensive comparison of baseline approaches, such as methods for offline multi-view 3D reconstruction resulting in point clouds and triangle meshes, online multi-view depth estimation, as well as single-image depth estimation using self-supervised deep learning. With the release of our evaluation routines, we aim to provide a universal protocol for the evaluation of depth estimation and 3D reconstruction methods on the UseGeo dataset. The conducted experiments and analyses show that each method excels in a different category: the depth estimation from COLMAP outperforms that of the other approaches, ACMMP achieves the lowest error and highest completeness for point clouds, while OpenMVS produces triangle meshes with the lowest error. Among the online methods for depth estimation, the approach from the Plane-Sweep Library outperforms the FaSS-MVS approach, while the latter achieves the lowest processing time. Even though the particularly challenging nature of the dataset and the small amount of training data lead to a significantly higher error in the results of the self-supervised single-image depth estimation approach, it outperforms all other approaches in terms of processing time and frame rate. In our evaluation, we have also considered modern learning-based approaches that can be used for image-based 3D reconstruction, such as NeRFs. However, due to the significantly lower quality of the resulting 3D models, we have only included a qualitative comparison between NeRF-based and conventional approaches in the scope of this work.
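    The authoritative evaluation routines are the ones released on GitHub; purely to illustrate the pattern such depth-map metrics follow, the sketch below computes an L1 error, an RMSE, and a completeness ratio over the jointly valid pixels (the validity thresholds are invented for this example).
    ```python
    # Sketch: typical depth-map evaluation measures against a ground truth.
    import numpy as np

    def depth_errors(pred: np.ndarray, gt: np.ndarray, max_depth: float = 100.0):
        gt_valid = (gt > 0) & (gt < max_depth)            # pixels with usable ground truth
        both = gt_valid & np.isfinite(pred) & (pred > 0)  # ... that were also predicted
        diff = pred[both] - gt[both]
        return {
            "L1": float(np.mean(np.abs(diff))),
            "RMSE": float(np.sqrt(np.mean(diff ** 2))),
            "completeness": float(both.sum() / gt_valid.sum()),
        }
    ```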