ConvS2S (2): Overview

2020/12/27

-----



Cover image: https://pixabay.com/zh/photos/school-book-knowledge-study-1661731/

-----

◎ Abstract

-----

◎ Introduction

-----

Which problems (weaknesses) of prior work does this paper aim to solve?

-----



# GNMT.

-----



# PreConvS2S.

-----

◎ Method

-----

What is the proposed solution?

-----


# ConvS2S.

-----

What are the specific details? (A hedged sketch of the core building block follows below.)
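
ConvS2S builds its encoder and decoder from stacked 1D convolutions with gated linear units (GLU), residual connections, and learned position embeddings. The snippet below is a minimal PyTorch sketch of one encoder-style block; the class name ConvGLUBlock, the hyperparameters, and the toy usage are my own illustrative assumptions, not the paper's fairseq implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvGLUBlock(nn.Module):
    # One encoder-style block: 1D convolution -> GLU gating -> scaled residual.
    # Names and hyperparameters are illustrative, not the paper's fairseq code.
    def __init__(self, d_model=512, kernel_size=3):
        super().__init__()
        # Output 2 * d_model channels so GLU can split them into
        # a "content" half and a "gate" half.
        self.conv = nn.Conv1d(d_model, 2 * d_model, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x):                      # x: (batch, seq_len, d_model)
        residual = x
        y = self.conv(x.transpose(1, 2))       # (batch, 2*d_model, seq_len)
        y = F.glu(y, dim=1)                    # gate -> (batch, d_model, seq_len)
        y = y.transpose(1, 2) + residual       # residual connection
        return y * (0.5 ** 0.5)                # scale by sqrt(0.5), as in the paper

# Toy usage: token embeddings + learned position embeddings, then a small stack.
vocab_size, max_len, d_model = 1000, 50, 512
tok_emb = nn.Embedding(vocab_size, d_model)
pos_emb = nn.Embedding(max_len, d_model)
encoder = nn.Sequential(*[ConvGLUBlock(d_model) for _ in range(4)])

tokens = torch.randint(0, vocab_size, (2, 20))            # batch of 2, length 20
positions = torch.arange(20).unsqueeze(0).expand(2, -1)
hidden = encoder(tok_emb(tokens) + pos_emb(positions))    # (2, 20, 512)
print(hidden.shape)
```

The decoder side of ConvS2S additionally uses causal (left-only) padding and multi-step attention over the encoder output on top of blocks like this one; those parts are omitted here for brevity.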

-----

◎ Result

-----

The results of this paper.

-----

◎ Discussion

-----

Comparison between this paper and other papers (in terms of results or methods).

-----

Comparison of results.

-----

Comparison of methods.

-----

◎ Conclusion 

-----

◎ Future Work

-----

Follow-up research in related fields.

-----

Follow-up research in fields that extend this one.

-----

◎ References

-----

# GNMT. Cited 3,391 times.

Wu, Yonghui, et al. "Google's neural machine translation system: Bridging the gap between human and machine translation." arXiv preprint arXiv:1609.08144 (2016).

https://arxiv.org/pdf/1609.08144.pdf


# ConvS2S. Cited 1,772 times.

Gehring, Jonas, et al. "Convolutional sequence to sequence learning." arXiv preprint arXiv:1705.03122 (2017).

https://arxiv.org/pdf/1705.03122.pdf


# ELMo. Cited 5,229 times. ELMo is the best-performing of the Context2vec-style approaches.

Peters, Matthew E., et al. "Deep contextualized word representations." arXiv preprint arXiv:1802.05365 (2018).

https://arxiv.org/pdf/1802.05365.pdf


# Context2vec. Cited 312 times.

Melamud, Oren, Jacob Goldberger, and Ido Dagan. "context2vec: Learning generic context embedding with bidirectional lstm." Proceedings of the 20th SIGNLL conference on computational natural language learning. 2016.

https://www.aclweb.org/anthology/K16-1006.pdf

-----

◎ Related Papers

-----

# PreConvS2S. Cited 273 times.

Gehring, Jonas, et al. "A convolutional encoder model for neural machine translation." arXiv preprint arXiv:1611.02344 (2016).

https://arxiv.org/pdf/1611.02344.pdf

-----

◎ Reference Articles

The Star Also Rises: NLP(四):ConvS2S

https://hemingwang.blogspot.com/2019/04/convs2s.html

-----
