  • Publication
    Quantum Circuit Evolution on NISQ Devices
    Variational quantum circuits build the foundation for various classes of quantum algorithms. In a nutshell, the weights of a parametrized quantum circuit are varied until the empirical sampling distribution of the circuit is sufficiently close to a desired outcome. Numerical first-order methods are applied frequently to fit the parameters of the circuit, but most of the time, the circuit itself, that is, the actual composition of gates, is fixed. Methods for optimizing the circuit design jointly with the weights have been proposed, but empirical results are rather scarce. Here, we consider a simple evolutionary strategy that addresses the trade-off between finding appropriate circuit architectures and parameter tuning. We evaluate our method both via simulation and on actual quantum hardware. Our benchmark problems include the transverse field Ising Hamiltonian and the Sherrington-Kirkpatrick spin model. Despite the shortcomings of current noisy intermediate-scale quantum hardware, we find only a minor slowdown on actual quantum machines compared to simulations. Moreover, we investigate which mutation operations most significantly contribute to the optimization. The results provide intuition on how randomized search heuristics behave on actual quantum hardware and lay out a path for further refinement of evolutionary quantum gate circuits.
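As an illustrative sketch of the idea (not the authors' implementation), the following toy (1+1) evolutionary strategy evolves both the gate composition and the rotation angles of a 2-qubit circuit against the transverse-field Ising Hamiltonian, using a plain NumPy statevector simulation; the gate set, mutation rates, and generation count are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pauli matrices and a fixed CNOT (control qubit 0, target qubit 1).
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])
CNOT = np.eye(4, dtype=complex)[[0, 1, 3, 2]]

# Transverse-field Ising Hamiltonian on 2 qubits: H = -Z1 Z2 - g (X1 + X2).
g = 1.0
H = -np.kron(Z, Z) - g * (np.kron(X, I2) + np.kron(I2, X))
ground = np.linalg.eigvalsh(H)[0]  # exact ground energy, for reference

def gate_matrix(kind, qubit, theta):
    """Single-qubit RX/RZ rotation embedded into the 2-qubit space."""
    if kind == "rx":
        c, s = np.cos(theta / 2), -1j * np.sin(theta / 2)
        u = np.array([[c, s], [s, c]])
    else:  # "rz"
        u = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
    return np.kron(u, I2) if qubit == 0 else np.kron(I2, u)

def energy(genome):
    """Apply the genome's gates to |00> and return the energy expectation."""
    psi = np.zeros(4, dtype=complex)
    psi[0] = 1.0
    for op in genome:
        psi = (CNOT if op[0] == "cnot" else gate_matrix(*op)) @ psi
    return float(np.real(psi.conj() @ H @ psi))

def mutate(genome):
    """Structure + parameter mutations: insert, delete, or perturb a gate."""
    child = list(genome)
    r = rng.random()
    if r < 0.3 or not child:  # insert a random gate at a random position
        kind = rng.choice(["rx", "rz", "cnot"])
        op = ("cnot",) if kind == "cnot" else (kind, int(rng.integers(2)), rng.uniform(0, 2 * np.pi))
        child.insert(int(rng.integers(len(child) + 1)), op)
    elif r < 0.4:  # delete a random gate
        child.pop(int(rng.integers(len(child))))
    else:  # perturb the angle of a random rotation gate, if any
        idx = [i for i, op in enumerate(child) if op[0] != "cnot"]
        if idx:
            i = idx[int(rng.integers(len(idx)))]
            kind, q, theta = child[i]
            child[i] = (kind, q, theta + rng.normal(scale=0.3))
    return child

# (1+1) evolutionary strategy: keep the child whenever it is not worse.
genome, best = [], energy([])
for _ in range(500):
    child = mutate(genome)
    e = energy(child)
    if e <= best:
        genome, best = child, e
```

Because the strategy is elitist, the best energy is monotonically non-increasing from the empty-circuit baseline toward (but not necessarily reaching) the true ground energy.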
  • Publication
    Advances in Password Recovery Using Generative Deep Learning Techniques
    Password guessing approaches via deep learning have recently been investigated, with significant breakthroughs in their ability to generate novel, realistic password candidates. In the present work we study a broad collection of deep-learning and probabilistic models in the light of password guessing: attention-based deep neural networks, autoencoding mechanisms, and generative adversarial networks. We provide novel generative deep-learning models in the form of variational autoencoders exhibiting state-of-the-art sampling performance and yielding additional latent-space features such as interpolations and targeted sampling. Lastly, we perform a thorough empirical analysis in a unified controlled framework over well-known datasets (RockYou, LinkedIn, MySpace, Youku, Zomato, Pwnd). Our results not only identify the most promising schemes driven by deep neural networks, but also illustrate the strengths of each approach in terms of generation variability and sample uniqueness.
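One latent-space feature mentioned above, interpolation between codes, is often done spherically rather than linearly for Gaussian VAE latents. A minimal sketch (the decoder, latent dimension, and path length are assumptions; each interpolated code would be decoded into a password candidate):

```python
import numpy as np

def slerp(z1, z2, t):
    """Spherical interpolation between latent codes z1 and z2 at fraction t."""
    cos_omega = np.dot(z1, z2) / (np.linalg.norm(z1) * np.linalg.norm(z2))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):  # (near-)parallel codes: fall back to linear
        return (1 - t) * z1 + t * z2
    return (np.sin((1 - t) * omega) * z1 + np.sin(t * omega) * z2) / np.sin(omega)

rng = np.random.default_rng(1)
z_a, z_b = rng.normal(size=16), rng.normal(size=16)  # hypothetical latent codes
path = [slerp(z_a, z_b, t) for t in np.linspace(0, 1, 7)]
# each point on `path` would be fed to the trained decoder to yield a candidate
```

Spherical interpolation keeps the intermediate codes at norms typical of samples from the Gaussian prior, which linear interpolation does not.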
  • Publication
    Auto Encoding Explanatory Examples with Stochastic Paths
    In this paper we ask which main factors determine a classifier's decision-making process, and we uncover such factors by studying latent codes produced by auto-encoding frameworks. To deliver an explanation of a classifier's behaviour, we propose a method that provides a series of examples highlighting semantic differences between the classifier's decisions. These examples are generated through interpolations in latent space. We introduce and formalize the notion of a semantic stochastic path as a suitable stochastic process defined in feature (data) space via latent code interpolations. We then introduce the concept of semantic Lagrangians as a way to incorporate the desired classifier behaviour, and we find that the solution of the associated variational problem allows for highlighting differences in the classifier's decisions. Importantly, within our framework the classifier is used as a black box, and only its evaluation is required.
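The black-box setting can be sketched with a toy example: scan a linear latent path between two codes, querying only the classifier's output, and locate where the decision flips. The classifier, latent dimension, and path resolution here are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Toy black-box classifier: only its evaluation is used, never its gradients.
def classifier(x):
    return int(x.sum() > 0)

def decision_flip(z0, z1, n=50):
    """Scan a linear path between latent codes and return the first t where
    the predicted label changes, or None if it never does."""
    ts = np.linspace(0, 1, n)
    labels = [classifier((1 - t) * z0 + t * z1) for t in ts]
    for t, a, b in zip(ts[1:], labels, labels[1:]):
        if a != b:
            return t
    return None

z0 = -np.ones(4)  # classified as 0
z1 = np.ones(4)   # classified as 1
t_flip = decision_flip(z0, z1)
```

Examples decoded on either side of `t_flip` would then highlight what the classifier treats as semantically different.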
  • Publication
    Switching Dynamical Systems with Deep Neural Networks
    The problem of uncovering different dynamical regimes is of pivotal importance in time series analysis. Switching dynamical systems provide a solution for modeling physical phenomena whose time series data exhibit different dynamical modes. In this work we propose a novel variational RNN model for switching dynamics allowing for both non-Markovian and nonlinear dynamical behavior between and within dynamic modes. Attention mechanisms are provided to inform the switching distribution. We evaluate our model on synthetic and empirical datasets of diverse nature and successfully uncover different dynamical regimes and predict the switching dynamics.
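To make the setting concrete, here is a minimal generator for data of this kind: a scalar time series driven by two linear modes with Markov switching. This is only the data-generating side (the simplest Markovian case), not the paper's variational RNN model; the mode parameters and transition matrix are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical linear modes: x_t = a * x_{t-1} + b + noise, per mode (a, b).
modes = [(0.99, 0.0), (0.90, 0.5)]
# Mode transition matrix: each regime is sticky (95% chance of staying).
P = np.array([[0.95, 0.05], [0.05, 0.95]])

def simulate(T=200):
    """Sample T steps of a Markov-switching linear dynamical system."""
    z, x = 0, 0.0
    zs, xs = [], []
    for _ in range(T):
        z = rng.choice(2, p=P[z])          # switch (or stay) between modes
        a, b = modes[z]
        x = a * x + b + 0.05 * rng.normal()  # evolve under the active mode
        zs.append(z)
        xs.append(x)
    return np.array(zs), np.array(xs)

zs, xs = simulate()
```

The inference task the paper addresses is the inverse problem: recovering the latent mode sequence `zs` from observations like `xs` alone, with non-Markovian and nonlinear dynamics allowed.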
  • Publication
    Learning Deep Generative Models for Queuing Systems
    Modern society is heavily dependent on large-scale client-server systems, with applications ranging from Internet and communication services to sophisticated logistics and deployment of goods. To maintain and improve such a system, a careful study of client and server dynamics is needed, e.g. response/service times, average number of clients at given times, etc. To this end, one traditionally relies, within the queuing-theory formalism, on parametric analysis and explicit distribution forms. However, parametric forms limit the model's expressiveness and can struggle on very large datasets. We propose a novel data-driven approach towards queuing systems: the Deep Generative Service Times. Our methodology delivers a flexible and scalable model for service and response times. We leverage the representation capabilities of Recurrent Marked Point Processes for the temporal dynamics of clients, as well as Wasserstein Generative Adversarial Network techniques, to learn deep generative models which are able to represent complex conditional service time distributions. We provide extensive experimental analysis on both empirical and synthetic datasets, showing the effectiveness of the proposed models.
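The classical parametric baseline the abstract contrasts against can be sketched in a few lines: an M/M/1 queue where response times follow from the Lindley recursion. The rates below are assumptions; in the paper's approach, a learned generative model would replace the exponential service-time sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rates: Poisson arrivals (rate lam), exponential service (rate mu).
lam, mu, n = 0.8, 1.0, 10_000
interarrival = rng.exponential(1 / lam, n)
service = rng.exponential(1 / mu, n)

# Lindley recursion: waiting time of customer k in a single-server FIFO queue.
wait = np.zeros(n)
for k in range(1, n):
    wait[k] = max(0.0, wait[k - 1] + service[k - 1] - interarrival[k])

response = wait + service
# For M/M/1, mean response time is 1 / (mu - lam) = 5.0 with these rates.
```

The limitation is visible in the sampler line: an exponential (or any fixed parametric) service-time distribution cannot capture multimodal or history-dependent service behaviour, which is exactly what a conditional deep generative model is meant to provide.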
  • Publication
    Recurrent Point Review Models
    Deep neural network models represent the state-of-the-art methodologies for natural language processing. Here we build on top of these methodologies to incorporate temporal information and model how review data changes with time. Specifically, we use the dynamic representations of recurrent point process models, which encode the history of how business or service reviews are received in time, to generate instantaneous language models with improved prediction capabilities. Simultaneously, our methodologies enhance the predictive power of our point process models by incorporating summarized review content representations. We provide recurrent network and temporal convolution solutions for modeling the review content. We deploy our methodologies in the context of recommender systems, effectively characterizing the change in preference and taste of users as time evolves. Source code is available at [1].
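The temporal side of such models rests on point-process intensities over review arrival times. A minimal sketch of a self-exciting (Hawkes) intensity with an exponential kernel, a standard building block for this kind of model (the parameter values and event times are illustrative assumptions):

```python
import numpy as np

def hawkes_intensity(t, events, mu=0.2, alpha=0.5, beta=1.0):
    """lambda(t) = mu + alpha * sum_i exp(-beta * (t - t_i)) over past events:
    a base rate mu plus a decaying boost from each earlier review."""
    past = events[events < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

events = np.array([1.0, 1.5, 4.0])  # hypothetical review arrival times
lam = hawkes_intensity(5.0, events)
```

In the paper's setting, a recurrent network generalizes this hand-crafted kernel: its hidden state summarizes the review history and drives both the intensity and the instantaneous language model.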
  • Publication
    On Learning a Control System without Continuous Feedback
    (2020)
    Angelov, Georgi
    We discuss a class of control problems by means of deep neural networks (DNN). Our goal is to develop DNN models that, once trained, are able to produce solutions of such problems at an acceptable error-rate and much faster computation time than an ordinary numerical solver. In the present note we study two such models for the Brockett integrator control problem.
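For context, the Brockett integrator is the nonholonomic system x1' = u1, x2' = u2, x3' = x1*u2 - x2*u1; no continuous feedback law stabilizes it, which motivates learned controllers. A minimal Euler simulation (the oscillatory open-loop control below is a standard illustrative choice, not the note's DNN controller):

```python
import numpy as np

def step(x, u, dt=0.01):
    """One Euler step of the Brockett integrator:
    x1' = u1, x2' = u2, x3' = x1*u2 - x2*u1."""
    x1, x2, x3 = x
    u1, u2 = u
    return np.array([x1 + dt * u1,
                     x2 + dt * u2,
                     x3 + dt * (x1 * u2 - x2 * u1)])

# A trained DNN would map the state to (u1, u2); here an oscillatory open-loop
# control drives x3, which no constant control can move from this start.
x = np.array([1.0, 0.0, 0.0])
for k in range(1000):
    t = 0.01 * k
    u = np.array([-np.sin(t), np.cos(t)])
    x = step(x, u)
# (x1, x2) traces the unit circle while x3 grows at roughly unit rate.
```

With this control, x1 ≈ cos t and x2 ≈ sin t, so the drift x1*u2 - x2*u1 ≈ 1 and x3 accumulates steadily, illustrating how periodic inputs exploit the nonholonomic coupling.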
  • Publication
    Explorations in Quantum Neural Networks with Intermediate Measurements
    (2020)
    Franken, Lukas
    In this short note we explore a few quantum circuits with the particular goal of basic image recognition. The models we study are inspired by recent progress in Quantum Convolutional Neural Networks (QCNN) [12]. We present a few experimental results in which we attempt to learn basic image patterns on a scaled-down version of the MNIST dataset.
  • Publication
    RatVec: A General Approach for Low-dimensional Distributed Vector Representations via Rational Kernels
    (2019)
    Brito, Eduardo; Domingo-Fernández, Daniel; Hoyt, Charles Tapley
    We present a general framework, RatVec, for learning vector representations of non-numeric entities based on domain-specific similarity functions interpreted as rational kernels. We show competitive performance using k-nearest neighbors in the protein family classification task and in Dutch spelling correction. To promote reusability and extensibility, we have made our code and pre-trained models available at https://github.com/ratvec.
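The overall recipe can be sketched with a toy similarity function: embed each word by its kernel similarity to a few anchor words, and correct spelling by nearest neighbor under the kernel. The character-bigram cosine below is a simple stand-in for RatVec's rational kernels, and the lexicon and anchors are hypothetical.

```python
import numpy as np
from collections import Counter

def ngram_sim(a, b, n=2):
    """Cosine similarity of character n-gram counts — a toy stand-in for the
    domain-specific similarity functions interpreted as rational kernels."""
    ga = Counter(a[i:i + n] for i in range(len(a) - n + 1))
    gb = Counter(b[i:i + n] for i in range(len(b) - n + 1))
    dot = sum(c * gb.get(g, 0) for g, c in ga.items())
    norm = np.sqrt(sum(c * c for c in ga.values()) * sum(c * c for c in gb.values()))
    return dot / norm if norm else 0.0

# Spelling correction as nearest neighbor under the kernel (toy lexicon).
lexicon = ["kernel", "vector", "protein", "rational"]
correction = max(lexicon, key=lambda w: ngram_sim("vecotr", w))

# Low-dimensional representation: similarities of a word to a few anchor words.
anchors = ["kernel", "protein"]
def embed(w):
    return np.array([ngram_sim(w, a) for a in anchors])
```

The `embed` vectors can then feed any downstream learner, such as the k-nearest-neighbors classifier used in the paper's protein family task.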