Nonlinear Transformations of Marginalisation Mappings for Kernels on Hidden Markov Models
Main Authors:
Format: Conference Proceeding
Language: English
Summary: Many problems in machine learning involve variable-size structured data, such as sets, sequences, trees, and graphs. Generative (i.e. model-based) kernels are well suited to handling structured data, since they can capture its underlying structure by allowing the inclusion of prior information via specification of the source models. In this paper we focus on marginalisation kernels for variable-length sequences generated by hidden Markov models. In particular, we propose a new class of generative embeddings, obtained through a nonlinear transformation of the original marginalisation mappings. This makes it possible to embed the input data into a new feature space where better class separation can be achieved, and leads to a new kernel defined as the inner product in the transformed feature space. Different nonlinear transformations are proposed, and two different ways of applying these transformations to the original mappings are considered. The main contribution of this paper is a proof that the proposed nonlinear transformations increase the margin of the optimal hyperplane of an SVM classifier, thus enhancing classification performance. The proposed mappings are tested on two different sequence classification problems, with highly satisfactory results that outperform state-of-the-art methods.
DOI: 10.1109/ICMLA.2011.106
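The record contains no code. As a rough illustration of the idea in the summary, the following is a minimal NumPy sketch of one plausible marginalisation embedding for a discrete HMM (expected state-occupancy fractions computed by forward-backward), with an elementwise power nonlinearity standing in for the transformation and the kernel taken as the inner product of the transformed embeddings. The toy HMM parameters, the specific embedding, and the exponent `rho` are all illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

# Toy discrete HMM with 2 hidden states and 2 symbols (assumed parameters).
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])     # transition probabilities P(state' | state)
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])     # emission probabilities P(symbol | state)
pi = np.array([0.5, 0.5])      # initial state distribution

def state_posteriors(obs):
    """Forward-backward: P(state_t = i | obs) for each position t."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

def marginalisation_embedding(obs):
    """Fixed-size embedding of a variable-length sequence:
    the average state posterior (expected state-occupancy fractions)."""
    return state_posteriors(obs).mean(axis=0)

def transformed_kernel(x, y, rho=0.5):
    """Kernel as the inner product of nonlinearly transformed embeddings.
    The elementwise power x**rho is one hypothetical choice of
    transformation; rho = 0.5 gives a Bhattacharyya-like kernel."""
    return float(np.dot(marginalisation_embedding(x) ** rho,
                        marginalisation_embedding(y) ** rho))
```

Such a kernel can be passed to an SVM (e.g. a precomputed Gram matrix in scikit-learn's `SVC(kernel="precomputed")`); the paper's claim concerns how the nonlinearity affects the resulting margin.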