What are the main points of ConvS2S?
2020/10/27
-----
1. Introduction
Which problems (weaknesses) of earlier work does this paper set out to solve?
-----
# GNMT
GNMT [6] is built on recurrent networks: each hidden state depends on the previous one, so computation within a sequence cannot be parallelized during training.
-----
# PreConvS2S
The authors' earlier convolutional encoder model [7] replaces only the encoder with convolutions; the decoder is still recurrent, so the sequential bottleneck is only half removed.
-----
2. Method
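The heart of the model is a stack of convolutional blocks: a one-dimensional convolution produces 2d channels, a gated linear unit (GLU), v([A; B]) = A ⊗ σ(B), gates them back down to d, and a residual connection wraps the block. Below is a minimal PyTorch sketch of one encoder block; the dimensions, kernel width, and padding are illustrative assumptions, not the authors' fairseq implementation.

```python
# Minimal sketch of one ConvS2S encoder block (Gehring et al., 2017 [4]).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvGLUBlock(nn.Module):
    """1-D convolution + gated linear unit (GLU) + residual connection."""
    def __init__(self, d_model: int = 512, kernel_width: int = 3):
        super().__init__()
        # The convolution outputs 2*d_model channels:
        # one half becomes the values A, the other half the gates B.
        self.conv = nn.Conv1d(d_model, 2 * d_model,
                              kernel_size=kernel_width,
                              padding=kernel_width // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); Conv1d expects (batch, channels, seq_len).
        residual = x
        h = self.conv(x.transpose(1, 2))   # (batch, 2*d_model, seq_len)
        h = F.glu(h, dim=1)                # A * sigmoid(B) -> (batch, d_model, seq_len)
        # Residual connection, scaled by sqrt(0.5) to keep the variance
        # of the sum stable, as in the paper.
        return (h.transpose(1, 2) + residual) * (0.5 ** 0.5)

# Usage: stack such blocks on top of word + position embeddings.
x = torch.randn(2, 10, 512)                # (batch, seq_len, d_model)
block = ConvGLUBlock()
print(block(x).shape)                      # torch.Size([2, 10, 512])
```

Because every position is computed by the same convolution, the whole sequence can be processed in parallel during training, unlike an RNN's step-by-step recurrence.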
-----
3. Results
ConvS2S outperforms the deep LSTM setup of GNMT [6] on both WMT'14 English-German and WMT'14 English-French translation, while decoding about an order of magnitude faster on both GPU and CPU.
-----
4. Discussion
-----
5. Conclusion and Future Work
-----
Conclusion
-----
Future Work
-----
References
◎ Main papers
[1] LSTM. Cited 39,743 times.
Hochreiter, Sepp, and Jürgen Schmidhuber. "Long short-term memory." Neural computation 9.8 (1997): 1735-1780.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.676.4320&rep=rep1&type=pdf
[2] Seq2seq. Cited 12,676 times.
Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. "Sequence to sequence learning with neural networks." Advances in neural information processing systems. 2014.
http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf
[3] Attention 1. Cited 14,895 times.
Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. "Neural machine translation by jointly learning to align and translate." arXiv preprint arXiv:1409.0473 (2014).
https://arxiv.org/pdf/1409.0473.pdf
[4] ConvS2S. Cited 1,772 times.
Gehring, Jonas, et al. "Convolutional sequence to sequence learning." arXiv preprint arXiv:1705.03122 (2017).
https://arxiv.org/pdf/1705.03122.pdf
[5] Transformer. Cited 13,554 times.
Vaswani, Ashish, et al. "Attention is all you need." Advances in neural information processing systems. 2017.
https://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf
-----
◎ Related papers
-----
[6] GNMT. Cited 3,391 times.
Wu, Yonghui, et al. "Google's neural machine translation system: Bridging the gap between human and machine translation." arXiv preprint arXiv:1609.08144 (2016).
https://arxiv.org/pdf/1609.08144.pdf
[7] PreConvS2S. Cited 273 times.
Gehring, Jonas, et al. "A convolutional encoder model for neural machine translation." arXiv preprint arXiv:1611.02344 (2016).
https://arxiv.org/pdf/1611.02344.pdf
-----
◎ Reference articles
The Star Also Rises: NLP(四):ConvS2S
https://hemingwang.blogspot.com/2019/04/convs2s.html
-----