
DenseHMM: Learning Hidden Markov Models by Learning Dense Representations

Paper presented at Learning Meaningful Representations of Life Workshop at the 34th Conference on Neural Information Processing Systems, NeurIPS 2020, December 6, 2020, Online, Vancouver, Canada
Authors: Sicking, Joachim; Pintz, Maximilian; Akila, Maram; Wirtz, Tim

Full text: urn:nbn:de:0011-n-6347656 (1.1 MByte PDF)
MD5 fingerprint: 2e287ecba76144adff7348b889869472
Created on: May 19, 2021

2020, 14 pp.
Learning Meaningful Representations of Life Workshop <2020, Online>
Conference on Neural Information Processing Systems (NeurIPS) <34, 2020, Online>
Bundesministerium für Bildung und Forschung BMBF (Germany)
01S18038B; ML2R
Machine Learning Rhein-Ruhr
Conference paper, electronic publication
Fraunhofer IAIS

We propose DenseHMM – a modification of Hidden Markov Models (HMMs) that learns dense representations of both the hidden states and the observables. Compared to the standard HMM, transition probabilities are not atomic but composed of these representations via kernelization. Our approach enables constraint-free, gradient-based optimization. We propose two optimization schemes that make use of this: a modification of the Baum-Welch algorithm and a direct co-occurrence optimization. The latter is highly scalable and, empirically, incurs no loss of performance compared to standard HMMs. We show that the non-linearity of the kernelization is crucial for the expressiveness of the representations. Properties of the DenseHMM, such as learned co-occurrences and log-likelihoods, are studied empirically on synthetic and biomedical datasets.
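The core idea of composing transition probabilities from dense representations can be illustrated with a minimal sketch. Assuming (as an illustration, not the paper's exact parameterization) that each hidden state gets a source vector and a target vector, and that the kernelization is a softmax over their dot products, rows of the resulting transition matrix sum to one by construction, so the vectors can be optimized by gradient descent without simplex constraints:

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, dim = 4, 3
# Hypothetical dense representations: a "source" vector u_i and a
# "target" vector z_j per hidden state (names are illustrative).
U = rng.normal(size=(n_states, dim))
Z = rng.normal(size=(n_states, dim))

def softmax(x, axis=-1):
    # Numerically stable row-wise softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Kernelized transition matrix: entries are not free parameters but are
# composed from the representations; each row is a valid distribution.
A = softmax(U @ Z.T, axis=1)

print(A.shape)        # (4, 4)
print(A.sum(axis=1))  # each row sums to 1
```

Because `A` is always a valid stochastic matrix for any values of `U` and `Z`, gradients with respect to the embeddings can flow through the softmax without projection or re-normalization steps.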