Title: On continuous space word representations as input of LSTM language model
Citation: SOUTNER, Daniel; MÜLLER, Luděk. On continuous space word representations as input of LSTM language model. In: Statistical Language and Speech Processing. Berlin: Springer, 2015, p. 267-274. (Lecture Notes in Computer Science; 9449). ISBN 978-3-319-25788-4.
Keywords (Czech): umělé neuronové sítě; modelování; kontinuální reprezentace slov
Keywords (English): artificial neural networks; modelling; continuous representations of words
Abstract (English): Artificial neural networks have become the state of the art in language modelling, and Long Short-Term Memory (LSTM) networks appear to be an efficient architecture for the task. The continuous skip-gram and continuous bag-of-words (CBOW) algorithms learn high-quality distributed vector representations that capture a large number of syntactic and semantic word relationships. In this paper, we carry out experiments with a combination of these models: continuous word representations trained with the skip-gram, CBOW, or GloVe method, together with a word cache expressed as a vector using latent Dirichlet allocation (LDA). All of these are used as the input of the LSTM network in place of the 1-of-N coding traditionally used in language models. The proposed models are tested on the Penn Treebank and MALACH corpora.
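To make the architecture in the abstract concrete, below is a minimal sketch, not the authors' implementation, of an LSTM language model whose input is a pretrained continuous word vector (e.g., from skip-gram, CBOW, or GloVe) concatenated with a document-level LDA topic vector, instead of 1-of-N (one-hot) coding. It is written in PyTorch; all dimensions, names, and the use of random stand-in vectors are illustrative assumptions.

import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, pretrained_vectors, lda_dim, hidden_dim, vocab_size):
        super().__init__()
        # Frozen pretrained word vectors replace the usual 1-of-N input.
        self.embed = nn.Embedding.from_pretrained(pretrained_vectors, freeze=True)
        embed_dim = pretrained_vectors.size(1)
        # The LSTM consumes [word vector ; LDA cache/topic vector].
        self.lstm = nn.LSTM(embed_dim + lda_dim, hidden_dim, batch_first=True)
        # The output layer still predicts over the full vocabulary.
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_ids, lda_vec):
        # word_ids: (batch, seq_len); lda_vec: (batch, lda_dim)
        x = self.embed(word_ids)                           # (batch, seq, embed_dim)
        topics = lda_vec.unsqueeze(1).expand(-1, x.size(1), -1)
        h, _ = self.lstm(torch.cat([x, topics], dim=-1))   # (batch, seq, hidden)
        return self.out(h)                                 # next-word logits

# Toy usage with random stand-ins for pretrained skip-gram vectors.
vocab_size, embed_dim, lda_dim, hidden_dim = 10000, 100, 20, 256
vectors = torch.randn(vocab_size, embed_dim)
model = LSTMLanguageModel(vectors, lda_dim, hidden_dim, vocab_size)
logits = model(torch.randint(0, vocab_size, (4, 12)), torch.rand(4, lda_dim))
print(logits.shape)  # torch.Size([4, 12, 10000])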
Appears in Collections: Články / Articles (KKY)