In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNNs). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.
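The architecture described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: it uses plain tanh recurrent cells instead of the gated hidden units the paper introduces, and all parameter names, sizes, and the random initialization are illustrative assumptions.

```python
# Sketch of an RNN Encoder-Decoder: one RNN compresses the source sequence
# into a fixed-length vector c; a second RNN, conditioned on c, assigns a
# probability to each target symbol. Plain tanh cells, illustrative sizes.
import numpy as np

rng = np.random.default_rng(0)
V, H = 8, 16  # vocabulary size and hidden size (illustrative choices)

def one_hot(i, n=V):
    v = np.zeros(n)
    v[i] = 1.0
    return v

# Encoder parameters (hypothetical small random initialization)
Wx_e = rng.normal(0, 0.1, (H, V))
Wh_e = rng.normal(0, 0.1, (H, H))
b_e = np.zeros(H)

# Decoder parameters; the decoder additionally conditions on the summary c
Wx_d = rng.normal(0, 0.1, (H, V))
Wh_d = rng.normal(0, 0.1, (H, H))
Wc_d = rng.normal(0, 0.1, (H, H))
b_d = np.zeros(H)
W_out = rng.normal(0, 0.1, (V, H))

def encode(src):
    # Run the encoder RNN over the source symbols; the final hidden state
    # is the fixed-length representation c of the whole sequence.
    h = np.zeros(H)
    for s in src:
        h = np.tanh(Wx_e @ one_hot(s) + Wh_e @ h + b_e)
    return h

def decode_step(y_prev, h, c):
    # One decoder step: update the hidden state from the previous target
    # symbol and the summary c, then emit a distribution over next symbols.
    h = np.tanh(Wx_d @ one_hot(y_prev) + Wh_d @ h + Wc_d @ c + b_d)
    logits = W_out @ h
    p = np.exp(logits - logits.max())
    return h, p / p.sum()

def log_prob(src, tgt, bos=0):
    # log P(tgt | src): the conditional probability that the encoder and
    # decoder are jointly trained to maximize.
    c = encode(src)
    h, y_prev, lp = np.zeros(H), bos, 0.0
    for y in tgt:
        h, p = decode_step(y_prev, h, c)
        lp += np.log(p[y])
        y_prev = y
    return lp
```

With trained parameters, a score such as `log_prob(source_phrase, target_phrase)` is the kind of quantity the paper feeds into the SMT system: the conditional probability of a phrase pair, used as one additional feature in the existing log-linear model.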
Kyunghyun Cho
Bart van Merriënboer
Çaǧlar Gülçehre
Université de Montréal
Canadian Institute for Advanced Research
Constructor University
www.synapsesocial.com/papers/696402a893519ba8671d0489 — DOI: https://doi.org/10.48550/arxiv.1406.1078