Title: On continuous space word representations as input of LSTM language model
Authors: Soutner, Daniel
Müller, Luděk
Citation: SOUTNER, Daniel; MÜLLER, Luděk. On continuous space word representations as input of LSTM language model. In: Statistical Language and Speech Processing. Berlin: Springer, 2015, pp. 267-274. (Lecture Notes in Computer Science; 9449). ISBN 978-3-319-25788-4.
Issue Date: 2015
Publisher: Springer
Document type: article
URI: http://hdl.handle.net/11025/26011
ISBN: 978-3-319-25788-4
ISSN: 0302-9743
Keywords: artificial neural networks; modeling; continuous representations of words
Abstract: Artificial neural networks have become the state of the art in language modelling, and Long Short-Term Memory (LSTM) networks appear to be an efficient architecture. The continuous skip-gram and continuous bag-of-words (CBOW) algorithms learn high-quality distributed vector representations that capture a large number of syntactic and semantic word relationships. In this paper, we carried out experiments combining these powerful models: continuous word representations trained with the skip-gram, CBOW, or GloVe methods, and a word cache expressed as a vector using latent Dirichlet allocation (LDA). These are fed to the input of an LSTM network in place of the 1-of-N coding traditionally used in language models. The proposed models are evaluated on the Penn Treebank and MALACH corpora.
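
The architecture described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, not the paper's code: frozen pretrained word vectors (standing in for skip-gram/CBOW/GloVe embeddings) are concatenated with an LDA topic vector of the recent word cache and fed to an LSTM, replacing the traditional 1-of-N input. All class names, layer sizes, and the random stand-in data are illustrative assumptions.

import torch
import torch.nn as nn

class EmbeddingLSTMLM(nn.Module):
    """Sketch of an LSTM LM whose input is a pretrained word vector
    concatenated with an LDA cache vector, instead of 1-of-N coding."""

    def __init__(self, pretrained_vectors, lda_dim, hidden_dim, vocab_size):
        super().__init__()
        # Frozen pretrained embeddings stand in for vectors trained
        # with word2vec (skip-gram/CBOW) or GloVe.
        self.embed = nn.Embedding.from_pretrained(pretrained_vectors, freeze=True)
        emb_dim = pretrained_vectors.size(1)
        # The LSTM consumes the word vector concatenated with the LDA vector.
        self.lstm = nn.LSTM(emb_dim + lda_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)  # logits over the vocabulary

    def forward(self, word_ids, lda_cache, state=None):
        # word_ids: (batch, seq); lda_cache: (batch, seq, lda_dim)
        x = torch.cat([self.embed(word_ids), lda_cache], dim=-1)
        h, state = self.lstm(x, state)
        return self.out(h), state  # next-word logits per position

# Toy usage with random stand-ins for the pretrained vectors and LDA topics.
vocab_size, emb_dim, lda_dim, hidden = 100, 50, 10, 64
vectors = torch.randn(vocab_size, emb_dim)  # would come from word2vec/GloVe
model = EmbeddingLSTMLM(vectors, lda_dim, hidden, vocab_size)
ids = torch.randint(0, vocab_size, (2, 7))
topics = torch.softmax(torch.randn(2, 7, lda_dim), dim=-1)  # mock LDA posteriors
logits, _ = model(ids, topics)
print(logits.shape)  # torch.Size([2, 7, 100])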
Rights: © Springer
Appears in Collections: Články / Articles (KKY)

Files in This Item:
File         Description  Size       Format
Soutner.pdf  Full text    299.81 kB  Adobe PDF

