
Attention (2): Overview


2020/12/26

-----


Image source: https://pixabay.com/zh/photos/street-sign-note-direction-possible-141396/

-----

◎ Abstract

-----

◎ Introduction

-----

Which problems (weaknesses) of the prior work does this paper set out to solve?

-----


# Seq2seq 1.
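
The weakness targeted here: in the classic seq2seq encoder-decoder, the encoder must compress the entire source sentence into a single fixed-length vector, and every target word is decoded from that same vector, so quality degrades on long sentences. Restated compactly in the paper's notation (a condensed paraphrase, not a quote):

c = q(\{h_1, \dots, h_{T_x}\})

p(y_i \mid \{y_1, \dots, y_{i-1}\}, c) = g(y_{i-1}, s_i, c)

The single c is the bottleneck that Attention 1 removes.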

-----

◎ Method

-----

What is the proposed solution?

-----


# Attention 1.
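
The core idea of Attention 1 (Bahdanau et al.): replace the single fixed c with a per-step context vector c_i, a weighted average of all encoder hidden states, where the weights come from a small learned alignment network. Restated from the paper:

e_{ij} = v_a^\top \tanh(W_a s_{i-1} + U_a h_j)

\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{T_x} \exp(e_{ik})}

c_i = \sum_{j=1}^{T_x} \alpha_{ij} h_j

The previous decoder state s_{i-1} scores every source annotation h_j, and the softmax turns those scores into a soft alignment.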

-----

What are the concrete details?
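
A minimal numpy sketch of one attention step, to make the equations above concrete. The function name additive_attention and the toy dimensions are my own illustrative choices; only the W_a / U_a / v_a notation follows the paper, and this is not the authors' implementation.

import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_attention(s_prev, H, W_a, U_a, v_a):
    # s_prev: previous decoder state s_{i-1}, shape (d_s,)
    # H:      encoder hidden states h_1..h_T stacked as rows, shape (T, d_h)
    # W_a (d_a, d_s), U_a (d_a, d_h), v_a (d_a,): the alignment model
    e = np.tanh(W_a @ s_prev + H @ U_a.T) @ v_a   # scores e_ij, shape (T,)
    alpha = softmax(e)                            # alignment weights alpha_ij
    c = alpha @ H                                 # context vector c_i, shape (d_h,)
    return c, alpha

# Toy usage: 5 source positions, small illustrative dimensions.
rng = np.random.default_rng(0)
T, d_h, d_s, d_a = 5, 4, 4, 6
H = rng.normal(size=(T, d_h))
s_prev = rng.normal(size=(d_s,))
W_a = rng.normal(size=(d_a, d_s))
U_a = rng.normal(size=(d_a, d_h))
v_a = rng.normal(size=(d_a,))
c, alpha = additive_attention(s_prev, H, W_a, U_a, v_a)
print(alpha, alpha.sum())   # weights over source positions; they sum to 1.0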

-----

◎ Result

-----

The results achieved by this paper.

-----

◎ Discussion

-----

Comparison of this paper with other papers (in results or methods).

-----

Comparison of results.

-----

Comparison of methods.

-----

◎ Conclusion 

-----

◎ Future Work

-----

Follow-up research in related fields.

-----

Follow-up research in fields that extend this work.

-----

◎ References

-----

# Attention 1. Cited 14,895 times.

Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. "Neural machine translation by jointly learning to align and translate." arXiv preprint arXiv:1409.0473 (2014).

https://arxiv.org/pdf/1409.0473.pdf


# Visual Attention. Cited 6,060 times.

Xu, Kelvin, et al. "Show, attend and tell: Neural image caption generation with visual attention." International conference on machine learning. 2015.

http://proceedings.mlr.press/v37/xuc15.pdf


# Attention 2. Cited 4,781 times.

Luong, Minh-Thang, Hieu Pham, and Christopher D. Manning. "Effective approaches to attention-based neural machine translation." arXiv preprint arXiv:1508.04025 (2015).

https://arxiv.org/pdf/1508.04025.pdf


# Short Attention. Cited 76 times.

Daniluk, Michał, et al. "Frustratingly short attention spans in neural language modeling." arXiv preprint arXiv:1702.04521 (2017).

https://arxiv.org/pdf/1702.04521.pdf

-----

Attention and Augmented Recurrent Neural Networks

https://distill.pub/2016/augmented-rnns/

-----
