  • Publication
    Informed Pre-Training on Prior Knowledge
    When training data is scarce, the incorporation of additional prior knowledge can assist the learning process. While it is common to initialize neural networks with weights that have been pre-trained on other large data sets, pre-training on more concise forms of knowledge has rather been overlooked. In this paper, we propose a novel informed machine learning approach and suggest pre-training on prior knowledge. Formal knowledge representations, e.g., graphs or equations, are first transformed into a small and condensed data set of knowledge prototypes. We show that informed pre-training on such knowledge prototypes (i) speeds up the learning process, (ii) improves generalization capabilities in regimes where not enough training data is available, and (iii) increases model robustness. Analyzing which parts of the model are affected most by the prototypes reveals that improvements come from deeper layers, which typically represent high-level features. This confirms that informed pre-training can indeed transfer semantic knowledge, a novel effect showing that knowledge-based pre-training has strengths additional and complementary to existing approaches.
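    A minimal sketch of the two-stage procedure this abstract describes, assuming a toy equation (y = sin x) as the prior knowledge and invented layer sizes and learning rates; it illustrates the idea in PyTorch and is not the authors' implementation.

    ```python
    # Hypothetical sketch: formal knowledge (here the equation y = sin x) is first
    # condensed into a small prototype data set, the network is pre-trained on it,
    # and only then fine-tuned on the scarce task data. All sizes are invented.
    import numpy as np
    import torch
    import torch.nn as nn

    def make_prototypes(n=32):
        """Condense the prior knowledge y = sin(x) into n knowledge prototypes."""
        x = np.linspace(-np.pi, np.pi, n, dtype=np.float32)
        return torch.from_numpy(x).unsqueeze(1), torch.from_numpy(np.sin(x)).unsqueeze(1)

    model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()

    # Stage 1: informed pre-training on the knowledge prototypes.
    xp, yp = make_prototypes()
    for _ in range(500):
        opt.zero_grad()
        loss_fn(model(xp), yp).backward()
        opt.step()

    # Stage 2: fine-tuning on the scarce real data (a tiny noisy sample here).
    xt = torch.rand(8, 1) * 2 * np.pi - np.pi
    yt = torch.sin(xt) + 0.05 * torch.randn(8, 1)
    for _ in range(200):
        opt.zero_grad()
        loss_fn(model(xt), yt).backward()
        opt.step()
    ```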
  • Publication
    Informed Machine Learning - A Taxonomy and Survey of Integrating Knowledge into Learning Systems
    Despite its great success, machine learning can have its limits when dealing with insufficient training data. A potential solution is the additional integration of prior knowledge into the training process, which leads to the notion of informed machine learning. In this paper, we present a structured overview of various approaches in this field. First, we provide a definition and propose a concept for informed machine learning, which illustrates its building blocks and distinguishes it from conventional machine learning. Second, we introduce a taxonomy that serves as a classification framework for informed machine learning approaches. It considers the source of knowledge, its representation, and its integration into the machine learning pipeline. Third, we survey related research and describe how different knowledge representations such as algebraic equations, logic rules, or simulation results can be used in learning systems. This evaluation of numerous papers on the basis of our taxonomy uncovers key methods in the field of informed machine learning.
  • Publication
    Recurrent Adversarial Service Times
    Service system dynamics occur at the interplay between customer behaviour and a service provider's response. This kind of dynamics can effectively be modeled within the framework of queuing theory, where customers' arrivals are described by point process models. However, these approaches are limited by parametric assumptions, for example on inter-event time distributions. In this paper, we address these limitations and propose a novel, deep neural network solution to the queuing problem. Our solution combines a recurrent neural network that models the arrival process with a recurrent generative adversarial network that models the service time distribution. We evaluate our methodology on various empirical datasets ranging from internet services (Blockchain, GitHub, Stackoverflow) to mobility service systems (New York taxi cabs).
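    As a rough illustration of the recurrent adversarial idea described above, the following sketch pairs a GRU-based generator that emits positive service times with a GRU-based discriminator that scores whole sequences. All architectural details (layer sizes, softplus output, noise dimension) are assumptions for illustration, not the paper's exact model.

    ```python
    # Hypothetical sketch of a recurrent GAN over service times: the generator maps
    # noise sequences to positive inter-event times; the discriminator emits one
    # logit per sequence. Adversarial training would alternate updates of the two.
    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        def __init__(self, noise_dim=8, hidden=32):
            super().__init__()
            self.rnn = nn.GRU(noise_dim, hidden, batch_first=True)
            self.out = nn.Linear(hidden, 1)
        def forward(self, z):                              # z: (batch, seq_len, noise_dim)
            h, _ = self.rnn(z)
            return nn.functional.softplus(self.out(h))     # positive service times

    class Discriminator(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.rnn = nn.GRU(1, hidden, batch_first=True)
            self.out = nn.Linear(hidden, 1)
        def forward(self, t):                              # t: (batch, seq_len, 1)
            h, _ = self.rnn(t)
            return self.out(h[:, -1])                      # one logit per sequence

    G, D = Generator(), Discriminator()
    z = torch.randn(4, 20, 8)
    fake_times = G(z)                                      # (4, 20, 1) synthetic service times
    score = D(fake_times)                                  # adversarial losses would follow
    ```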
  • Publication
    Neural conditional gradients
    (2018)
    Schramowski, Patrick
    The move from hand-designed to learned optimizers in machine learning has been quite successful for both gradient-based and gradient-free optimizers. When facing a constrained problem, however, maintaining feasibility typically requires a projection step, which can be computationally expensive and non-differentiable. We show how the design of projection-free convex optimization algorithms can be cast as a learning problem based on Frank-Wolfe Networks: recurrent networks implementing the Frank-Wolfe algorithm, also known as conditional gradients. This allows them to learn to exploit structure when, e.g., optimizing over rank-1 matrices. Our LSTM-learned optimizers outperform hand-designed as well as learned but unconstrained ones. We demonstrate this for training support vector machines and softmax classifiers.
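    For reference, the hand-designed building block that such networks unroll looks as follows: each Frank-Wolfe iteration calls a linear minimization oracle over the feasible set instead of projecting. The quadratic objective and simplex feasible set below are illustrative choices; the paper's contribution is replacing this fixed update rule with a learned (LSTM) one.

    ```python
    # Classical Frank-Wolfe (conditional gradients): projection-free because each
    # iterate is a convex combination of feasible points and so stays feasible.
    import numpy as np

    def frank_wolfe(grad, lmo, x0, iters=100):
        x = x0
        for t in range(iters):
            s = lmo(grad(x))            # linear minimization oracle: argmin_{s in C} <g, s>
            gamma = 2.0 / (t + 2.0)     # standard step size schedule
            x = (1 - gamma) * x + gamma * s
        return x

    # Example: min_x ||x - b||^2 over the simplex {x >= 0, sum(x) = 1}.
    b = np.array([0.2, 0.9, -0.1])
    grad = lambda x: 2 * (x - b)

    def simplex_lmo(g):                 # the minimizing vertex of the simplex
        s = np.zeros_like(g)
        s[np.argmin(g)] = 1.0
        return s

    x_star = frank_wolfe(grad, simplex_lmo, np.ones(3) / 3)
    ```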
  • Publication
    Maximum Entropy Models of Shortest Path and Outbreak Distributions in Networks
    Properties of networks are often characterized in terms of features such as node degree distributions, average path lengths, diameters, or clustering coefficients. Here, we study shortest path length distributions. On the one hand, average as well as maximum distances can be determined from them; on the other hand, they are closely related to the dynamics of network spreading processes. Because of the combinatorial nature of networks, we apply maximum entropy arguments to derive a general, physically plausible model. In particular, we establish the generalized Gamma distribution as a continuous characterization of shortest path length histograms of networks of arbitrary topology. Experimental evaluations corroborate our theoretical results.
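    A hedged sketch of the empirical side of this claim, assuming networkx and scipy as tooling (an assumption, not the authors' code): compute a graph's shortest path length sample and fit scipy's generalized Gamma distribution to it.

    ```python
    # Collect all pairwise shortest path lengths of a synthetic graph and fit a
    # generalized Gamma distribution, the continuous model established in the paper.
    import networkx as nx
    from scipy.stats import gengamma

    G = nx.barabasi_albert_graph(500, 3, seed=0)
    lengths = [d for src, dists in nx.shortest_path_length(G)
               for d in dists.values() if d > 0]

    # Continuous fit to the discrete path length sample; loc fixed at zero.
    a, c, loc, scale = gengamma.fit(lengths, floc=0)
    print(f"fitted gengamma: a={a:.2f}, c={c:.2f}, scale={scale:.2f}")
    ```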
  • Publication
    GeoDBLP: Geo-tagging DBLP for mining the sociology of computer science
    Many collective human activities have been shown to exhibit universal patterns. However, the possibility of universal patterns across timing events of researcher migration has barely been explored at global scale. Here, we show that timing events of migration within different countries exhibit remarkable similarities. Specifically, we look at the distribution governing data on researcher migration inferred from the web. Compiling the data is in itself a significant advance in the quantitative analysis of migration patterns: official and commercial records are often access restricted, incompatible between countries, and in particular not registered across researchers. Instead, we introduce GeoDBLP, where we propagate geographical seed locations retrieved from the web across the DBLP database of 1,080,958 authors and 1,894,758 papers. More important still, we are able to find statistical patterns and create models that explain the migration of researchers. For instance, we show that the science job market can be treated as a Poisson process, with individual propensities to migrate following a log-normal distribution over the researcher's career stage. That is, although jobs enter the market constantly, researchers are generally not "memoryless" but have to care greatly about their next move. The propensity to make k>1 migrations, however, follows a gamma distribution, suggesting that migration at later career stages is "memoryless". This aligns with, but goes beyond, scientometric models typically postulated on the basis of small case studies. On a very large, transnational scale, we establish the first general regularities, which should have major implications for strategies in education and research worldwide.
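    A toy simulation of the stated model, with invented parameters: exponential waiting times mixed over log-normally distributed individual propensities yield a population that is not "memoryless" even though each individual's process is.

    ```python
    # Jobs arrive as a Poisson process, but each researcher has an individual,
    # log-normally distributed propensity to move. Parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    rates = rng.lognormal(mean=0.0, sigma=0.75, size=100_000)  # individual propensities
    waits = rng.exponential(1.0 / rates)                       # time to next migration

    # Mixing exponential waits over log-normal rates produces a heavier-than-
    # exponential tail, i.e. the population as a whole is not memoryless.
    cv = waits.std() / waits.mean()
    print(f"coefficient of variation: {cv:.2f} (a pure exponential gives 1.00)")
    ```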
  • Publication
    Efficient information theoretic clustering on discrete lattices
    We consider the problem of clustering data that reside on discrete, low-dimensional lattices. Canonical examples of this setting are found in image segmentation and key point extraction. Our solution is based on a recent approach to information theoretic clustering in which clusters result from an iterative procedure that minimizes a divergence measure. We replace costly processing steps of the original algorithm by convolutions. These allow for highly efficient implementations and thus significantly reduce runtime. This paper thereby bridges a gap between machine learning and signal processing.
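    The central efficiency trick can be illustrated in a few lines: on a regular lattice, kernel density estimates that would otherwise require summing a kernel over all data points reduce to a single convolution over the grid. The Gaussian kernel and scipy usage below are assumptions for illustration.

    ```python
    # One convolution over the lattice replaces O(n^2) pairwise kernel evaluations
    # when computing a Parzen density estimate of points on a 2-D grid.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Binary indicator image of data points on a 128 x 128 lattice.
    points = np.zeros((128, 128))
    points[32, 32] = points[90, 100] = points[64, 64] = 1.0

    density = gaussian_filter(points, sigma=5.0)  # the convolution step
    density /= density.sum()                      # normalize to a density estimate
    ```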
  • Publication
    Computing the Kullback-Leibler divergence between two Weibull distributions
    We derive a closed form solution for the Kullback-Leibler divergence between two Weibull distributions. These notes are meant as reference material and are intended to provide a guided tour towards a result that is often mentioned but seldom made explicit in the literature.
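    The standard closed form in question, for P = Weibull(k1, l1) and Q = Weibull(k2, l2) with shapes k and scales l, can be implemented directly; the sketch below restates the result, with gamma the Euler-Mascheroni constant, and checks it on a trivial case.

    ```python
    # Closed-form KL divergence between two Weibull distributions:
    #   D(P||Q) = ln(k1/l1^k1) - ln(k2/l2^k2) + (k1 - k2)(ln l1 - gamma/k1)
    #             + (l1/l2)^k2 * Gamma(k2/k1 + 1) - 1
    import numpy as np
    from scipy.special import gamma as Gamma

    def kl_weibull(k1, l1, k2, l2):
        return (np.log(k1 / l1**k1) - np.log(k2 / l2**k2)
                + (k1 - k2) * (np.log(l1) - np.euler_gamma / k1)
                + (l1 / l2)**k2 * Gamma(k2 / k1 + 1) - 1.0)

    print(kl_weibull(2.0, 1.0, 2.0, 1.0))  # identical distributions -> 0.0
    ```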