  • Publication
    Topic modelling for spatial insights: Uncovering space use from movement data
    ( 2024-08-01)
    Andrienko, Gennady; Andrienko, Natalia
    We present a novel approach to understanding space use by moving entities based on repeated patterns of place visits and transitions. Our approach represents trajectories as text documents consisting of sequences of place visits or transitions and applies topic modelling to the corpus of these documents. The resulting topics represent combinations of places or transitions, respectively, that repeatedly co-occur in trips. Visualisation of the results in the spatial context reveals the regions of place connectivity through movements and the major channels used to traverse the space. This enables understanding of the use of space as a medium for movement. We compare the possibilities provided by topic modelling to alternative approaches exploiting a numeric measure of pairwise connectedness. We have extensively explored the potential of utilising topic modelling by applying our approach to multiple real-world movement data sets with different data collection procedures and varying spatial and temporal properties: GPS tracks of car traffic on roads, unconstrained movement on a football pitch, and episodic movement data reflecting social media posting events. The approach successfully demonstrated the ability to uncover meaningful patterns and interesting insights. We thoroughly discuss different aspects of the approach and share the knowledge and experience we have gained with those potentially interested in analysing movement data by means of topic modelling methods.
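    To make the core idea above concrete (trips treated as text documents of place tokens, mined with a topic model), the following minimal sketch runs latent Dirichlet allocation from scikit-learn over a handful of invented trips; the place identifiers, corpus, and parameters are illustrative assumptions, not the paper's data or pipeline.

      # Trajectories as "documents" of place visits, mined with LDA.
      # All trips and place IDs below are invented for illustration.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      # Each trip is a sequence of visited places, as a space-separated string.
      trips = [
          "home p12 p13 office",
          "home p12 p14 office",
          "office p14 p12 home",
          "home p20 gym p21 home",
          "home p20 p21 gym",
      ]

      # Bag-of-places representation, analogous to bag-of-words on text.
      vectorizer = CountVectorizer(token_pattern=r"\S+")
      X = vectorizer.fit_transform(trips)

      # Two topics ~ two recurring combinations of places co-occurring in trips.
      lda = LatentDirichletAllocation(n_components=2, random_state=0)
      lda.fit(X)

      places = vectorizer.get_feature_names_out()
      for k, weights in enumerate(lda.components_):
          top = [places[i] for i in weights.argsort()[::-1][:4]]
          print(f"topic {k}: {top}")

    Each topic's top-weighted places could then be drawn in their spatial context, which is the visualisation step the paper describes.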
  • Publication
    Big Data 2.0 - strengthening AI systems with synthetic data
    When applying artificial intelligence (AI), missing data is still a core challenge, and the cost of acquiring it is a critical factor in the economic viability of many business models. Synthetic, i.e. artificially generated, data offers a way out. One promising approach is to use an AI model for the data synthesis itself.
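    A minimal sketch of the idea of model-based data synthesis, with a Gaussian mixture standing in for the (unspecified) AI model; the "real" records are themselves simulated here so the example is self-contained.

      # Fit a generative model to real records, then sample synthetic ones.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      # Stand-in for 200 real records with two numeric features.
      real = np.column_stack([rng.normal(50, 5, 200), rng.normal(1.2, 0.1, 200)])

      model = GaussianMixture(n_components=3, random_state=0).fit(real)
      synthetic, _ = model.sample(1000)  # 1000 artificial records; no real record is exposed
      print(synthetic[:3])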
  • Publication
    The why and how of trustworthy AI
    Artificial intelligence is increasingly penetrating industrial applications as well as areas that affect our daily lives. As a consequence, there is a need for criteria to validate whether the quality of AI applications is sufficient for their intended use. Both in the academic community and in societal debate, agreement has emerged, under the term “trustworthiness”, on the set of essential quality requirements that should be placed on an AI application. At the same time, the question of how these quality requirements can be operationalized is to a large extent still open. In this paper, we consider trustworthy AI from two perspectives: the product and organizational perspective. For the former, we present an AI-specific risk analysis and outline how verifiable arguments for the trustworthiness of an AI application can be developed. For the latter, we explore how an AI management system can be employed to assure the trustworthiness of an organization with respect to its handling of AI. Finally, we argue that in order to achieve AI trustworthiness, coordinated measures from both product and organizational perspectives are required.
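    Purely to illustrate what operationalizing such quality requirements can look like in code, here is a hypothetical risk register pairing trustworthiness dimensions with verifiable checks; the dimensions, requirements, and numbers are invented examples, not the taxonomy from the paper.

      # A toy "risk register": each entry couples a quality requirement
      # with an executable check. All entries are invented examples.
      from dataclasses import dataclass
      from typing import Callable

      @dataclass
      class Risk:
          dimension: str             # e.g. robustness, transparency
          requirement: str           # quality requirement on the AI application
          check: Callable[[], bool]  # operationalized, verifiable test

      register = [
          Risk("robustness", "accuracy drop under input noise stays below 5 points",
               lambda: (0.91 - 0.88) < 0.05),  # clean vs. noisy accuracy (invented)
          Risk("transparency", "every prediction ships with a feature attribution",
               lambda: True),                  # placeholder for a real audit
      ]

      for r in register:
          print(f"{r.dimension}: {'pass' if r.check() else 'FAIL'} - {r.requirement}")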
  • Publication
    Language Understanders
    Deep neural language models such as GPT-3 write engaging texts but often garnish them with invented facts. The latest models verify their own content and could soon be generating homework or news. An inside look at the development.
  • Publication
    Supporting Visual Exploration of Iterative Job Scheduling
    ( 2022-03-30)
    Andrienko, Gennady; Andrienko, Natalia; Garcia, Jose Manuel Cordero; Vouros, George A.
    We consider the general problem known as job shop scheduling, in which multiple jobs consist of sequential operations that need to be executed or served by appropriate machines having limited capacities. For example, train journeys (jobs) consist of moves and stops (operations) to be served by rail tracks and stations (machines). A schedule is an assignment of the job operations to machines and times, specifying where and when they will be executed. The developers of computational methods for job scheduling need tools enabling them to explore how their methods work. At a high level of generality, we define the system of pertinent exploration tasks and a combination of visualizations capable of supporting the tasks. We provide general descriptions of the purposes, contents, visual encoding, properties, and interactive facilities of the visualizations and illustrate them with images from an example implementation in air traffic management. We justify the design of the visualizations based on the tasks, principles of creating visualizations for pattern discovery, and scalability requirements. The outcomes of our research are sufficiently general to be of use in a variety of applications.
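    To make the problem statement concrete, the sketch below encodes jobs as ordered (machine, duration) operations and assigns start times with a simple greedy rule; the data and the rule are illustrative assumptions, not one of the scheduling methods the visualizations are meant to help explore.

      # Job shop scheduling in miniature: jobs are sequences of operations,
      # each needing a machine (capacity 1 here) for some duration.
      jobs = {  # invented example in the spirit of the train illustration above
          "train_A": [("track_1", 3), ("station_X", 2), ("track_2", 4)],
          "train_B": [("track_1", 2), ("track_2", 3)],
      }

      machine_free = {}  # machine -> time at which it becomes available again
      schedule = []      # (job, op_index, machine, start, end)

      for job, ops in jobs.items():
          t = 0  # earliest start for this job's next operation
          for i, (machine, duration) in enumerate(ops):
              start = max(t, machine_free.get(machine, 0))
              end = start + duration
              machine_free[machine] = end
              t = end
              schedule.append((job, i, machine, start, end))

      for row in schedule:
          print(row)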
  • Publication
    From Text Generator to Digital Expert
    New language programs such as GPT-3 not only give machines a human-like feel for language; they are also expected to turn them into subject-matter experts. What is behind this? And can it succeed?
  • Publication
    Constructing Spaces and Times for Tactical Analysis in Football
    ( 2021)
    Andrienko, Gennady; Andrienko, Natalia; Anzer, Gabriel; Bauer, Pascal; Budziak, Guido; Weber, Hendrik
    A possible objective in analyzing trajectories of multiple simultaneously moving objects, such as football players during a game, is to extract and understand the general patterns of coordinated movement in different classes of situations as they develop. For achieving this objective, we propose an approach that includes a combination of query techniques for flexible selection of episodes of situation development, a method for dynamic aggregation of data from selected groups of episodes, and a data structure for representing the aggregates that enables their exploration and use in further analysis. The aggregation, which is meant to abstract general movement patterns, involves the construction of new time-homomorphic reference systems through iterative application of aggregation operators to a sequence of data selections. As similar patterns may occur at different spatial locations, we also propose constructing new spatial reference systems for aligning and matching movements irrespective of their absolute locations. The approach was tested in application to tracking data from two Bundesliga games of the 2018/2019 season. It enabled detection of interesting and meaningful general patterns of team behaviors in three classes of situations defined by football experts. The experts found the approach and the underlying concepts worth implementing in tools for football analysts.
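    A heavily simplified sketch of the two alignment ideas, using invented data: each episode of one player's movement is resampled onto a common relative time axis (a crude stand-in for the time-homomorphic reference systems), and coordinates are mirrored so that all attacks point the same way, before the aligned episodes are averaged.

      import numpy as np

      def align(episode, attacks_left_to_right):
          xy = np.asarray(episode, dtype=float)  # (T, 2) positions of one player
          if not attacks_left_to_right:          # mirror the 105 m pitch length
              xy[:, 0] = 105.0 - xy[:, 0]
          # Resample onto a fixed-length relative time axis (0..1, 50 steps).
          t_old = np.linspace(0, 1, len(xy))
          t_new = np.linspace(0, 1, 50)
          return np.column_stack([np.interp(t_new, t_old, xy[:, i]) for i in (0, 1)])

      episodes = [  # two invented episodes of different lengths and directions
          (np.column_stack([np.linspace(30, 70, 40), np.full(40, 34.0)]), True),
          (np.column_stack([np.linspace(80, 40, 60), np.full(60, 30.0)]), False),
      ]

      aligned = np.stack([align(xy, d) for xy, d in episodes])
      mean_path = aligned.mean(axis=0)  # aggregate movement pattern over episodes
      print(mean_path[:3])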
  • Publication
    Informed Machine Learning for Industry
    Deep neural networks have pushed the boundaries of artificial intelligence, but their training requires vast amounts of data and high-performance hardware. While truly digitised companies easily cope with these prerequisites, traditional industries still often lack the kind of data or infrastructure that the current generation of end-to-end machine learning depends on. The Fraunhofer Center for Machine Learning therefore develops novel solutions that are informed by expert knowledge. These typically require less training data and are more transparent in their decision-making processes.
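    As a toy illustration of one way expert knowledge can inform training, the sketch below adds a knowledge-based penalty (an assumed monotonicity rule: predictions should not decrease as the input grows) to an ordinary data loss; it is a generic example, not the Center's actual methodology.

      import numpy as np

      def informed_loss(w, x, y, lam=0.1):
          pred = w[0] + w[1] * x
          data_term = np.mean((pred - y) ** 2)  # ordinary least-squares fit
          knowledge_term = max(0.0, -w[1])      # penalize a decreasing trend
          return data_term + lam * knowledge_term

      x = np.array([1.0, 2.0, 3.0, 4.0])
      y = np.array([1.1, 1.9, 3.2, 3.9])
      print(informed_loss(np.array([0.0, 1.0]), x, y))

    The knowledge term acts like extra training signal: it rules out models an expert would reject, which is why such approaches can get by with fewer examples.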
  • Publication
    Professionals in Demand: Data Science for Engineers
    In the course of digitalisation and automation, not only are processes and structures being completely redesigned; job profiles are changing too and demand different competencies. New work and activity profiles are emerging. As during the dot-com boom of 1997, a war for talent is currently under way. Data science skills in particular are in greater demand on the labour market than ever before. The focus is not only on IT skills but also on communicative and interdisciplinary abilities. The pressure to innovate demands constant adaptation: IT knowledge becomes outdated faster than it did five or ten years ago.
  • Publication
    Edge Computing from the Perspective of Artificial Intelligence
    This article introduces the key technology behind modern AI: machine learning (ML), and specifically learning with artificial neural networks. It explains how such a model can be trained directly where the data originates, entirely without communicating raw data. This paradigm is called distributed learning, or learning at the edge for short, in contrast to the learning in the cloud that predominates today. In recent years, artificial intelligence has entered our everyday lives in the form of voice assistants and translators, object and face recognition, product recommendations and personalised information. The common technology behind all these capabilities is machine learning. The common enabler of machine learning and Big Data is the almost exponentially growing availability of resources such as computing power and storage capacity.
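    A minimal sketch of the learning-at-the-edge idea described above, in the spirit of federated averaging: each device fits a model on its private data, and only the model parameters, never the raw data, travel to the aggregator. The model, the data, and the plain parameter averaging are simplified assumptions.

      import numpy as np

      def local_fit(x, y):
          # Least-squares line fit on one device's private data.
          A = np.column_stack([np.ones_like(x), x])
          return np.linalg.lstsq(A, y, rcond=None)[0]  # [intercept, slope]

      rng = np.random.default_rng(1)
      devices = []  # simulated private data sets, one per edge device
      for _ in range(3):
          x = rng.uniform(0, 10, 30)
          y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 30)
          devices.append((x, y))

      local_params = [local_fit(x, y) for x, y in devices]  # computed on-device
      global_params = np.mean(local_params, axis=0)         # only parameters move
      print(global_params)  # close to [1, 2]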