Seq2seq
2020/12/04
-----
https://pixabay.com/zh/photos/artwork-hue-lighting-lamps-3719514/
-----
◎ Abstract
-----
◎ Introduction
-----
Which problems (weaknesses) of prior work does this paper try to solve?
-----
# History DL.
Explanation:
If a model starts producing output before it has read the whole sentence, it risks taking words out of context.
-----
# RCTM.
Explanation:
The encoder uses 1-D convolutions, but unlike the later ConvS2S it does not incorporate positional (timing) information.
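To see why this matters, here is a minimal numpy sketch (not the RCTM's actual architecture; the embedding size, filter width, and weights are all illustrative) of a plain 1-D convolution over word embeddings. Because the same filter is applied at every window, shifting the input only shifts the features: without added positional information, the encoder cannot tell *where* in the sentence a pattern occurred.

```python
import numpy as np

def conv1d_encoder(embeddings, kernel):
    """Slide a 1-D filter over a sequence of word embeddings.

    embeddings: (seq_len, d) matrix, one row per word
    kernel:     (k, d) convolution filter
    Returns one scalar feature per valid window position.
    """
    k = kernel.shape[0]
    n = embeddings.shape[0] - k + 1
    return np.array([np.sum(embeddings[i:i + k] * kernel) for i in range(n)])

rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 4))   # 5 words, 4-dim embeddings (toy sizes)
ker = rng.normal(size=(3, 4))   # width-3 filter

feats = conv1d_encoder(emb, ker)

# Shifting the input by one position shifts the features by one position:
# the convolution is position-invariant, so absolute position is lost.
shifted = np.roll(emb, 1, axis=0)
print(np.allclose(conv1d_encoder(shifted, ker)[1:], feats[:-1]))  # True
```

ConvS2S later fixed this by adding positional embeddings to the word embeddings before the convolution.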
-----
◎ Method
-----
Proposed solution?
-----
# Seq2seq 1.
Explanation:
Read the entire input sentence before starting to decode.
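The idea above can be sketched in a few lines of numpy (a toy tanh RNN, not the paper's multi-layer LSTM; the hidden size, weight matrices, and step count are illustrative assumptions). The key structural point is that the encoder consumes the whole source sentence into one fixed-size "thought vector", and only then does the decoder start generating:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                        # hidden size (toy)
W_in  = rng.normal(scale=0.1, size=(d, d))   # input-to-hidden weights
W_hh  = rng.normal(scale=0.1, size=(d, d))   # recurrent weights
W_out = rng.normal(scale=0.1, size=(d, d))   # hidden-to-output weights

def encode(inputs):
    """Consume the ENTIRE input sequence; only the final hidden
    state (the fixed-size 'thought vector') reaches the decoder."""
    h = np.zeros(d)
    for x in inputs:
        h = np.tanh(W_in @ x + W_hh @ h)
    return h

def decode(h, steps):
    """Decoding starts only after encoding has finished,
    feeding each output back in as the next input."""
    outputs, y = [], np.zeros(d)
    for _ in range(steps):
        h = np.tanh(W_in @ y + W_hh @ h)
        y = W_out @ h
        outputs.append(y)
    return outputs

src = [rng.normal(size=d) for _ in range(5)]  # 5 source "words"
ctx = encode(src)            # summary of the whole sentence
out = decode(ctx, steps=4)   # 4 target "words"
print(len(out), out[0].shape)
```

Note that the source and target lengths (5 vs. 4 here) are decoupled: the decoder's length is not tied to the encoder's, which is what makes this architecture suitable for translation.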
-----
Specific details?
-----
◎ Result
-----
Results of this paper.
-----
◎ Discussion
-----
Comparison of this paper with other work (results or methods).
-----
Comparison of results.
-----
Comparison of methods.
-----
◎ Conclusion
-----
◎ Future Work
-----
Follow-up research in related areas.
-----
Follow-up research in extended areas.
-----
◎ References
-----
# RCTM. Cited 1137 times.
Kalchbrenner, Nal, and Phil Blunsom. "Recurrent continuous translation models." Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. 2013.
https://www.aclweb.org/anthology/D13-1176.pdf
# Seq2seq 1. Cited 12676 times.
Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. "Sequence to sequence learning with neural networks." Advances in neural information processing systems. 2014.
http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf
# Seq2seq 2. Cited 11284 times.
Cho, Kyunghyun, et al. "Learning phrase representations using RNN encoder-decoder for statistical machine translation." arXiv preprint arXiv:1406.1078 (2014).
https://arxiv.org/pdf/1406.1078.pdf
# Paragraph2vec. Cited 6763 times.
Le, Quoc, and Tomas Mikolov. "Distributed representations of sentences and documents." International conference on machine learning. 2014.
http://proceedings.mlr.press/v32/le14.pdf
-----
# History DL.
Alom, Md Zahangir, et al. "The history began from alexnet: A comprehensive survey on deep learning approaches." arXiv preprint arXiv:1803.01164 (2018).
https://arxiv.org/ftp/arxiv/papers/1803/1803.01164.pdf
-----