Publication: Learning Weakly Convex Sets in Metric Spaces (2021-09-10)
Stadtländer, Eike
We introduce the notion of weak convexity in metric spaces, a generalization of ordinary convexity commonly used in machine learning. It is shown that weakly convex sets can be characterized by a closure operator and have a unique decomposition into a set of pairwise disjoint connected blocks. We give two generic efficient algorithms, an extensional and an intensional one, for learning weakly convex concepts and study their formal properties. Our experimental results concerning vertex classification clearly demonstrate the excellent predictive performance of the extensional algorithm. Two non-trivial applications of the intensional algorithm to polynomial PAC-learnability are presented. The first one deals with learning k-convex Boolean functions, which are already known to be efficiently PAC-learnable. It is shown how this positive result can be derived in a fairly easy way by the generic intensional algorithm. The second one is concerned with the Euclidean space equipped with the Manhattan distance. For this metric space, weakly convex sets form a union of pairwise disjoint axis-aligned hyperrectangles. We show that a weakly convex set that is consistent with a set of examples and contains a minimum number of hyperrectangles can be found in polynomial time. In contrast, this problem is known to be NP-complete if the hyperrectangles may overlap.
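The decomposition into disjoint blocks and their covering hyperrectangles under the Manhattan distance can be illustrated with a small sketch. This is not the paper's algorithm: the proximity parameter `theta` and the chaining rule (points in the same block iff linked by a chain of pairwise L1 distances at most `theta`) are simplifying assumptions for illustration.

```python
from itertools import combinations

def l1(p, q):
    """Manhattan (L1) distance between two points."""
    return sum(abs(a - b) for a, b in zip(p, q))

def weakly_convex_blocks(points, theta):
    """Group points into connected blocks via union-find: two points share
    a block if linked by a chain of points at pairwise L1 distance <= theta
    (an assumed, simplified stand-in for the paper's block decomposition)."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i, j in combinations(range(len(points)), 2):
        if l1(points[i], points[j]) <= theta:
            parent[find(i)] = find(j)
    blocks = {}
    for i, p in enumerate(points):
        blocks.setdefault(find(i), []).append(p)
    return list(blocks.values())

def bounding_box(block):
    """Axis-aligned hyperrectangle covering one block."""
    dim = len(block[0])
    lo = tuple(min(p[k] for p in block) for k in range(dim))
    hi = tuple(max(p[k] for p in block) for k in range(dim))
    return lo, hi
```

For example, the points `[(0,0), (1,0), (10,10), (11,10)]` with `theta=2` split into two blocks, each covered by a small axis-aligned rectangle, mirroring the "union of pairwise disjoint hyperrectangles" structure described above.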
Publication: Decision Snippet Features (2021-05-05)
Welke, Pascal; Alkhoury, Fouad
Decision trees excel at interpretability of their prediction results. To achieve the required prediction accuracies, however, large ensembles of decision trees (random forests) are often used, which reduces interpretability due to their size. Additionally, their size slows down inference on modern hardware and restricts their applicability in low-memory embedded devices. We introduce Decision Snippet Features, which are obtained from small subtrees that appear frequently in trained random forests. We subsequently show that linear models on top of these features achieve comparable, and sometimes even better, predictive performance than the original random forest, while reducing the model size by up to two orders of magnitude.
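The pipeline sketched above (mine frequent small subtrees, then featurize inputs by the leaf they reach in each snippet) can be illustrated with a toy implementation. The tree encoding, the depth limit, and the leaf-index featurization are all hypothetical simplifications, not the paper's actual representation.

```python
from collections import Counter

# A tree node is either a leaf label or a tuple (feature_idx, threshold, left, right).

def pattern(node, depth):
    """Depth-limited structural pattern of a subtree; leaves and
    truncated branches become None."""
    if depth == 0 or not isinstance(node, tuple):
        return None
    f, t, left, right = node
    return (f, t, pattern(left, depth - 1), pattern(right, depth - 1))

def subtrees(node, depth):
    """Yield the depth-limited pattern rooted at every internal node."""
    if not isinstance(node, tuple):
        return
    yield pattern(node, depth)
    _, _, left, right = node
    yield from subtrees(left, depth)
    yield from subtrees(right, depth)

def decision_snippets(forest, depth=2, top_k=3):
    """Most frequent depth-limited subtrees across the forest
    (a simplified stand-in for frequent-subtree mining)."""
    counts = Counter(p for tree in forest for p in subtrees(tree, depth))
    return [p for p, _ in counts.most_common(top_k)]

def snippet_features(x, snippets):
    """Map input x to the index of the leaf it reaches in each snippet;
    a linear model would then be trained on these features."""
    feats = []
    for s in snippets:
        leaf, node = 0, s
        while isinstance(node, tuple):
            f, t, left, right = node
            leaf = 2 * leaf + (1 if x[f] > t else 0)  # path bit: 1 = right
            node = left if x[f] <= t else right
        feats.append(leaf)
    return feats
```

On two toy trees sharing the subtree `(1, 3, 'B', 'C')`, that shared pattern is mined as the top snippet, and inputs are mapped to the leaf index they reach in it.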
Publication: Leitfaden zur Gestaltung vertrauenswürdiger Künstlicher Intelligenz (KI-Prüfkatalog) [Guideline for Designing Trustworthy Artificial Intelligence (AI Assessment Catalog)] (Fraunhofer IAIS, 2021)
Cremers, Armin B.; Sicking, Joachim
Publication: Maximum Margin Separations in Finite Closure Systems (2021)
Seiffahrt, Florian
Monotone linkage functions provide a measure of proximity between elements and subsets of a ground set. Combining this notion with Vapnik's idea of support vector machines, we extend the concepts of maximal closed set and half-space separation in finite closure systems to those with maximum margin. In particular, we define the notion of margin for finite closure systems by means of monotone linkage functions and give a greedy algorithm that efficiently computes a maximum margin closed set separation for two sets. The output closed sets are maximum margin half-spaces, i.e., they form a partitioning of the ground set, if the closure system is Kakutani. We have empirically evaluated our approach on different synthetic datasets. In addition to binary classification of finite subsets of the Euclidean space, we also considered the problem of vertex classification in graphs. Our experimental results provide clear evidence that maximal closed set separation with maximum margin results in much better predictive performance than that with arbitrary maximal closed sets.
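The greedy idea can be made concrete on a toy closure system. Everything here is a simplifying assumption for illustration: the interval closure operator on integers, the linkage function (minimum absolute difference), and the greedy order (smallest linkage first, keeping closures disjoint) are stand-ins, not the paper's construction.

```python
def interval_closure(s, ground):
    """Toy closure operator: cl(S) = all ground elements between
    min(S) and max(S)."""
    if not s:
        return frozenset()
    lo, hi = min(s), max(s)
    return frozenset(e for e in ground if lo <= e <= hi)

def linkage(e, s):
    """Assumed monotone linkage: proximity of element e to set s,
    taken here as the minimum absolute difference."""
    return min(abs(e - x) for x in s)

def greedy_margin_separation(a, b, ground):
    """Greedily grow two disjoint closed sets from seeds a and b,
    assigning unassigned elements in order of increasing linkage and
    accepting a grown closure only if the two sides stay disjoint
    (a simplified sketch of the greedy separation algorithm)."""
    A, B = interval_closure(a, ground), interval_closure(b, ground)
    if A & B:
        return None  # seeds are not separable
    rest = [e for e in ground if e not in A | B]
    rest.sort(key=lambda e: min(linkage(e, A), linkage(e, B)))
    for e in rest:
        if e in A | B:
            continue
        target = A if linkage(e, A) <= linkage(e, B) else B
        grown = interval_closure(target | {e}, ground)
        if target is A and not (grown & B):
            A = grown
        elif target is B and not (grown & A):
            B = grown
    return A, B
```

On the ground set 0..9 with seeds {1} and {8}, the greedy procedure grows two disjoint closed intervals that together cover the ground set, mirroring the half-space partitioning obtained in the Kakutani case.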
Publication: Constructing Spaces and Times for Tactical Analysis in Football (2021)
Andrienko, Gennady; Andrienko, Natalia; Anzer, Gabriel; Bauer, Pascal; Budziak, Guido; Weber, Hendrik
A possible objective in analyzing trajectories of multiple simultaneously moving objects, such as football players during a game, is to extract and understand the general patterns of coordinated movement in different classes of situations as they develop. For achieving this objective, we propose an approach that includes a combination of query techniques for flexible selection of episodes of situation development, a method for dynamic aggregation of data from selected groups of episodes, and a data structure for representing the aggregates that enables their exploration and use in further analysis. The aggregation, which is meant to abstract general movement patterns, involves the construction of new time-homomorphic reference systems through the iterative application of aggregation operators to a sequence of data selections. As similar patterns may occur at different spatial locations, we also propose constructing new spatial reference systems for aligning and matching movements irrespective of their absolute locations. The approach was tested in application to tracking data from two Bundesliga games of the 2018/2019 season. It enabled the detection of interesting and meaningful general patterns of team behaviors in three classes of situations defined by football experts. The experts found the approach and the underlying concepts worth implementing in tools for football analysts.
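The core idea of a new spatial reference system, re-expressing positions relative to a moving anchor so that similar movements match regardless of absolute pitch location, can be sketched minimally. The anchor choice (e.g. the ball) and the purely translational alignment are illustrative assumptions; the paper's reference-system construction is more elaborate.

```python
def align_to_reference(trajectory, reference):
    """Re-express a trajectory of (x, y) positions relative to a moving
    reference point sampled at the same time steps, so that movement
    patterns become comparable irrespective of absolute location
    (illustrative sketch; translation only, no rotation or scaling)."""
    return [(px - rx, py - ry)
            for (px, py), (rx, ry) in zip(trajectory, reference)]
```

Two runs toward the ball from opposite ends of the pitch then yield similar relative trajectories, which is what makes aggregation across episodes at different locations possible.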
Publication: A Novel Regression Loss for Non-Parametric Uncertainty Optimization (2021)
Sicking, Joachim; Pintz, Maximilian; Fischer, Asja
Quantification of uncertainty is one of the most promising approaches to establishing safe machine learning. Despite its importance, it is far from being generally solved, especially for neural networks. One of the most commonly used approaches so far is Monte Carlo dropout, which is computationally cheap and easy to apply in practice. However, it can underestimate the uncertainty. We propose a new objective, referred to as second-moment loss (SML), to address this issue. While the full network is encouraged to model the mean, the dropout networks are explicitly used to optimize the model variance. We intensively study the performance of the new objective on various UCI regression datasets. Compared to the state-of-the-art deep ensembles, SML leads to comparable prediction accuracies and uncertainty estimates while requiring only a single model. Under distribution shift, we observe moderate improvements. As a side result, we introduce an intuitive Wasserstein distance-based uncertainty measure that is non-saturating and thus allows resolving quality differences between any two uncertainty estimates.
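The two-term structure described above, the full model fits the mean while the dropout spread is fitted to the residual magnitude, can be written down as a toy scalar loss. This is a simplified stand-in: the exact parametrization and weighting of the second-moment loss in the paper differ.

```python
import math

def second_moment_style_loss(y, mean_pred, dropout_preds):
    """Toy SML-style objective for one scalar target: the first term
    pulls the full model's prediction toward the target; the second
    term pulls the standard deviation of the dropout predictions
    (around the full model's mean) toward the residual magnitude, so
    the dropout spread learns to track the model's actual error."""
    residual = y - mean_pred
    spread = math.sqrt(
        sum((d - mean_pred) ** 2 for d in dropout_preds) / len(dropout_preds)
    )
    return residual ** 2 + (spread - abs(residual)) ** 2
```

When the dropout spread already matches the residual (e.g. residual 1, dropout samples spread symmetrically one unit around the mean), only the mean-fitting term contributes.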
Publication: HOPS: Probabilistic Subtree Mining for Small and Large Graphs (2020)
Welke, Pascal; Seiffahrt, Florian
Frequent subgraph mining, i.e., the identification of relevant patterns in graph databases, is a well-known data mining problem with high practical relevance, since, next to summarizing the data, the resulting patterns can also be used to define powerful domain-specific similarity functions for prediction. In recent years, significant progress has been made towards subgraph mining algorithms that scale to complex graphs by focusing on tree patterns and probabilistically allowing a small amount of incompleteness in the result. Nonetheless, the complexity of the pattern matching component used for deciding subtree isomorphism on arbitrary graphs has significantly limited the scalability of existing approaches. In this paper, we adapt sampling techniques from mathematical combinatorics to the problem of probabilistic subtree mining in arbitrary databases of many small to medium-size graphs or a single large graph. By restricting to tree patterns, we provide an algorithm that approximately counts or decides subtree isomorphism for arbitrary transaction graphs in sub-linear time with one-sided error. Our empirical evaluation on a range of benchmark graph datasets shows that the novel algorithm substantially outperforms state-of-the-art approaches both in the task of approximate counting of embeddings in single large graphs and in probabilistic frequent subtree mining in large databases of small to medium-size graphs.
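The flavor of combinatorial sampling for embedding counting can be illustrated on the simplest tree pattern, a path: grow a random embedding step by step and multiply the number of available choices, giving an unbiased per-sample estimate of the embedding count. This is an illustrative stand-in for HOPS-style subtree counting, not the paper's algorithm.

```python
import random

def estimate_path_count(adj, k, samples=2000, rng=None):
    """Estimate the number of simple paths with k edges in a graph by
    sequential importance sampling: pick a random start vertex, extend
    to a random unvisited neighbor k times, and multiply the number of
    choices at each step. A sample that gets stuck contributes 0, so
    failures only underestimate that sample, never fabricate paths.
    adj: dict mapping each vertex to a list of neighbors."""
    rng = rng or random.Random(0)
    nodes = list(adj)
    total = 0.0
    for _ in range(samples):
        v = rng.choice(nodes)
        weight = len(nodes)          # choices for the start vertex
        path = {v}
        for _ in range(k):
            choices = [u for u in adj[v] if u not in path]
            if not choices:
                weight = 0           # stuck: this sample estimates 0
                break
            weight *= len(choices)
            v = rng.choice(choices)
            path.add(v)
        total += weight
    return total / samples
```

On a triangle, every sample for k = 2 has weight 3 * 2 * 1 = 6, matching the six ordered simple paths with two edges, so the estimate is exact there; on general graphs the estimate fluctuates around the true count.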
Publication: Visual Analytics for Data Scientists (Springer Nature, 2020)
Andrienko, Natalia; Andrienko, Gennady; Slingsby, Aidan; Turkay, Cagatay
Publication: Zertifizierung von KI-Systemen. Kompass für die Entwicklung und Anwendung vertrauenswürdiger KI-Systeme [Certification of AI Systems. A Compass for the Development and Application of Trustworthy AI Systems]
Publication: Zertifizierung von KI-Systemen [Certification of AI Systems]
The certification of artificial intelligence (AI) is regarded as a possible key prerequisite for advancing the use of AI systems in various areas of business and life. For a large number of AI systems, certification can help to exploit their potential benefit to society safely and in the public interest. Successful certification of AI systems enables the fulfillment of important societal and economic principles, such as legal certainty (e.g., liability and compensation), interoperability, IT security, and data protection. In addition, it can create trust among citizens, lead to better products, and influence national and international market dynamics. However, so that certification procedures do not become an obstacle to innovation, it is necessary to guarantee certain standards for AI systems, avoid overregulation, enable innovation, and ideally trigger new developments toward a European path in AI application. Experts of the Plattform Lernende Systeme have systematized this tension between potentials and challenges in the certification of AI systems in the present position paper. The paper, produced under the lead of the working group on IT Security, Privacy, Law and Ethics together with the working group on Technological Enablers and Data Science, examines various technical, legal, and ethical challenges and also provides an overview of existing initiatives for the certification of AI systems in Germany.