Prof. Dr. Stefan Wrobel

Publication: Decision Snippet Features (2021)
Authors: Welke, Pascal; Alkhoury, Fouad
Decision trees excel at the interpretability of their predictions. To achieve the required prediction accuracy, however, large ensembles of decision trees (random forests) are often considered, which reduces interpretability due to their size. Additionally, their size slows down inference on modern hardware and restricts their applicability in low-memory embedded devices. We introduce Decision Snippet Features, which are obtained from small subtrees that appear frequently in trained random forests. We subsequently show that linear models on top of these features achieve comparable, and sometimes even better, predictive performance than the original random forest, while reducing the model size by up to two orders of magnitude.
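The pipeline described in the abstract — route each input through a set of small subtrees, one-hot encode the leaf each one reaches, and train a linear model on the concatenated features — can be sketched as follows. This is a toy illustration, not the paper's code: the dict-based snippet representation and all names are hypothetical, and the frequent-subtree mining step is omitted.

```python
def snippet_leaf(snippet, x):
    """Route x through a small decision subtree ('snippet') given as a
    nested dict and return the index of the leaf it reaches."""
    node = snippet
    while "leaf" not in node:
        node = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
    return node["leaf"]

def snippet_features(snippets, x):
    """One-hot encode, per snippet, the leaf that x falls into."""
    feats = []
    for s in snippets:
        one_hot = [0.0] * s["n_leaves"]
        one_hot[snippet_leaf(s["tree"], x)] = 1.0
        feats.extend(one_hot)
    return feats

# Two toy snippets (depth-1 stumps) over a 2-dimensional input.
snippets = [
    {"n_leaves": 2, "tree": {"feature": 0, "threshold": 0.5,
                             "left": {"leaf": 0}, "right": {"leaf": 1}}},
    {"n_leaves": 2, "tree": {"feature": 1, "threshold": 1.0,
                             "left": {"leaf": 0}, "right": {"leaf": 1}}},
]

x = [0.2, 3.0]
phi = snippet_features(snippets, x)   # -> [1.0, 0.0, 0.0, 1.0]
# A linear model on phi replaces the full forest at inference time.
weights = [0.1, -0.3, 0.2, 0.4]
score = sum(w * f for w, f in zip(weights, phi))
```

Since only the small snippets and the linear weights are kept, the resulting model can be far smaller than the forest it was derived from.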
Publication: Adiabatic Quantum Computing for Max-Sum Diversification (2020)
The combinatorial problem of max-sum diversification asks for a maximally diverse subset of a given set of data. Here, we show that it can be expressed as an Ising energy minimization problem. Given this result, max-sum diversification can be solved on adiabatic quantum computers, and we present proof-of-concept simulations which support this claim. This, in turn, suggests that quantum computing might play a role in data mining. We therefore discuss quantum computing in a tutorial-like manner and elaborate on its current strengths and weaknesses for data analysis.
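One standard way to cast max-sum diversification as a quadratic binary minimization problem, in the spirit of the abstract: reward pairwise distances between selected items and add a quadratic penalty forcing exactly k selections. The construction below is a hedged sketch, not necessarily the paper's exact formulation; the penalty weight `lam` is an illustrative choice, and exhaustive search stands in for the adiabatic quantum computer.

```python
import itertools

def diversification_qubo(dist, k, lam):
    """Build a matrix Q so that minimizing z^T Q z over binary z selects
    k mutually distant items. Derived from
        -sum_{i<j} d_ij z_i z_j + lam * (sum_i z_i - k)^2,
    dropping the constant lam * k^2 (and using z_i^2 = z_i)."""
    n = len(dist)
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        Q[i][i] += lam * (1.0 - 2.0 * k)           # linear part of the penalty
        for j in range(n):
            if i != j:
                Q[i][j] += lam - 0.5 * dist[i][j]  # penalty cross-term minus diversity reward
    return Q

def qubo_energy(Q, z):
    """Energy z^T Q z for a binary configuration z."""
    n = len(z)
    return sum(Q[i][j] * z[i] * z[j] for i in range(n) for j in range(n))

# Four points on a line; the most diverse pair is {0, 10}.
points = [0.0, 1.0, 2.0, 10.0]
dist = [[abs(a - b) for b in points] for a in points]
Q = diversification_qubo(dist, k=2, lam=20.0)

# Exhaustive search stands in for the quantum annealer.
best = min(itertools.product([0, 1], repeat=4), key=lambda z: qubo_energy(Q, z))
# best == (1, 0, 0, 1): the two endpoints are selected.
```

A QUBO of this form maps directly onto an Ising model via the substitution z_i = (1 + s_i) / 2 with spins s_i in {-1, +1}.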

Publication: A QUBO Formulation of the k-Medoids Problem (2019)
Authors: Piatkowski, Nico
We are concerned with k-medoids clustering and propose a quadratic unconstrained binary optimization (QUBO) formulation of the problem of identifying k medoids among n data points without having to cluster the data. Given our QUBO formulation of this NP-hard problem, it should be possible to solve it on adiabatic quantum computers.
Publication: Leveraging Domain Knowledge for Reinforcement Learning using MMC Architectures (2019)
Authors: Schücker, Jannis
Despite the success of reinforcement learning methods in various simulated robotic applications, end-to-end training suffers from extensive training times due to high sample complexity and does not scale well to realistic systems. In this work, we speed up reinforcement learning by incorporating domain knowledge into policy learning. We revisit an architecture based on the mean of multiple computations (MMC) principle known from computational biology and adapt it to solve a reacher task. We approximate the policy using a simple MMC network, experimentally compare this idea to end-to-end deep learning architectures, and show that our approach reduces the number of interactions required to approximate a suitable policy by a factor of ten.
Publication: Max-Sum Dispersion via Quantum Annealing (2019)
We devise an Ising model for the max-sum dispersion problem which occurs in contexts such as Web search or text summarization. Given this Ising model, max-sum dispersion can be solved on adiabatic quantum computers; in proof-of-concept simulations, we solve the corresponding Schrödinger equations and observe our approach to work well.

Publication: Ising models for binary clustering via adiabatic quantum computing (2018)
Authors: Brito, E.; Ojeda, César
Existing adiabatic quantum computers are tailored towards minimizing the energies of Ising models. The quest for implementations of pattern recognition or machine learning algorithms on such devices can thus be seen as the quest for Ising model (re)formulations of their objective functions. In this paper, we present Ising models for the tasks of binary clustering of numerical and relational data and discuss how to set up corresponding quantum registers and Hamiltonian operators. In simulation experiments, we numerically solve the respective Schrödinger equations and observe our approaches to yield convincing results.
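For the simplest numerical case, the reduction can be illustrated as follows: encode cluster membership as a spin s_i in {-1, +1}, derive couplings from inner products of mean-centered data, and minimize the resulting Ising energy. The coupling choice here is an illustrative sketch, not necessarily the exact Hamiltonian from the paper, and exhaustive search stands in for the adiabatic quantum computer.

```python
import itertools

# Toy data: two clearly separated groups on the x-axis (already mean-centered).
pts = [(-1.0, 0.1), (-1.0, -0.1), (1.0, 0.1), (1.0, -0.1)]
n = len(pts)

# Couplings from inner products: similar points get positive K_ij,
# dissimilar ones negative.
K = [[sum(a * b for a, b in zip(pts[i], pts[j])) for j in range(n)]
     for i in range(n)]

def energy(s):
    """Ising energy that is low when similar points share a spin."""
    return -sum(K[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))

# Exhaustive search over all 2^n spin configurations stands in for the
# quantum device; the ground state is the binary clustering.
ground = min(itertools.product([-1, 1], repeat=n), key=energy)
```

On this toy data the ground state puts the two left points in one cluster and the two right points in the other (unique up to a global spin flip).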
Publication: Using echo state networks for cryptography (2017)
Authors: Ramamurthy, Rajkumar; Buza, Krisztian
Echo state networks are simple recurrent neural networks that are easy to implement and train. Despite their simplicity, they show a form of memory and can predict or regenerate sequences of data. We make use of this property to realize a novel neural cryptography scheme. The key idea is to assume that Alice and Bob share a copy of an echo state network. If Alice trains her copy to memorize a message, she can communicate the trained part of the network to Bob, who plugs it into his copy to regenerate the message. Considering a byte-level representation of input and output, the technique applies to arbitrary types of data (texts, images, audio files, etc.), and practical experiments reveal it to satisfy the fundamental cryptographic properties of diffusion and confusion.