Sunday, May 16, 2021

ResNet (2): Overview


2020/12/23

-----


https://pixabay.com/zh/photos/sisters-summer-child-girls-931151/

-----

◎ Abstract

-----

◎ Introduction

-----

Which problems (weaknesses) of the earlier research does this paper set out to solve?

-----


# VGGNet.

Description:

A network without identity mappings.
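
A minimal PyTorch sketch of the VGGNet building idea (the 64-channel width is just an illustrative choice, not taken from the paper): two stacked conv3 layers cover the same 5x5 receptive field as one conv5, with fewer parameters and an extra non-linearity, but there is still no shortcut path.

import torch
import torch.nn as nn

# Two stacked 3x3 convolutions cover the same 5x5 receptive field as a
# single 5x5 convolution, with fewer parameters and one extra ReLU.
vgg_style_pair = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
)
single_conv5 = nn.Conv2d(64, 64, kernel_size=5, padding=2)

x = torch.randn(1, 64, 32, 32)
print(vgg_style_pair(x).shape)   # torch.Size([1, 64, 32, 32])

# Parameter comparison: 2 * (3*3*64*64 + 64) vs. 5*5*64*64 + 64.
print(sum(p.numel() for p in vgg_style_pair.parameters()))   # 73856
print(sum(p.numel() for p in single_conv5.parameters()))     # 102464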

-----


Figure 3: GoogLeNet network with all the bells and whistles.

# GoogLeNet

Description:

It has a substitute for identity mappings (the auxiliary output layers). In a sense, ResNet can be viewed as an integration of VGGNet and GoogLeNet.
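
A rough PyTorch sketch of such an auxiliary output layer (the channel and pooling sizes are illustrative, not the exact GoogLeNet configuration): a small classifier head is attached to an intermediate feature map, so an extra loss, and therefore an extra gradient path, reaches the earlier layers during training.

import torch
import torch.nn as nn

# A GoogLeNet-style auxiliary classifier head attached to an
# intermediate feature map (sizes are illustrative).
class AuxHead(nn.Module):
    def __init__(self, in_channels=512, num_classes=1000):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(4)            # pool to a 4x4 map
        self.conv = nn.Conv2d(in_channels, 128, kernel_size=1)
        self.fc1 = nn.Linear(128 * 4 * 4, 1024)
        self.fc2 = nn.Linear(1024, num_classes)

    def forward(self, x):
        x = torch.relu(self.conv(self.pool(x)))
        x = torch.flatten(x, 1)
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

feat = torch.randn(2, 512, 14, 14)   # an intermediate feature map
print(AuxHead()(feat).shape)         # torch.Size([2, 1000])

At training time the auxiliary loss is added to the main loss with a small weight; at inference the head is discarded.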

-----

◎ Method

-----

What is the proposed solution?

-----


# ResNet v1.

Description:

With identity mappings added, the stacked layers only need to learn the residual.
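
A minimal PyTorch sketch of a ResNet v1 basic block (same-dimension case only; the projection shortcut used when dimensions change is omitted): the shortcut carries x unchanged, so the stacked layers only learn the residual F(x) and the block outputs ReLU(F(x) + x).

import torch
import torch.nn as nn

# A ResNet-v1-style basic block with an identity shortcut.
class BasicBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))   # F(x): the residual to be learned
        return torch.relu(out + x)        # identity shortcut adds x back

x = torch.randn(1, 64, 32, 32)
print(BasicBlock(64)(x).shape)   # torch.Size([1, 64, 32, 32])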

-----


# ResNet v2.

Description:

A more complete identity mapping: the shortcut path becomes a pure identity.
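
A minimal PyTorch sketch of the pre-activation (ResNet v2) block, under the same simplifying assumptions as above: BN and ReLU are moved in front of each convolution and nothing is applied after the addition, so the shortcut path stays a pure identity from block to block.

import torch
import torch.nn as nn

# A pre-activation (ResNet v2) block: BN and ReLU come before each
# convolution; no activation is applied after the addition.
class PreActBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))
        out = self.conv2(torch.relu(self.bn2(out)))
        return out + x   # the identity path is left untouched

x = torch.randn(1, 64, 32, 32)
print(PreActBlock(64)(x).shape)   # torch.Size([1, 64, 32, 32])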

-----

Concrete details?

https://hemingwang.blogspot.com/2021/03/resnetillustrated.html

-----

◎ Result

-----

The results of this paper.

-----

◎ Discussion

-----

Comparison of this paper with other papers (in results or methods).

-----

Comparison of results.

-----

Comparison of methods.

-----

◎ Conclusion 

-----

◎ Future Work

-----

Follow-up research in related areas.

-----


Figure 1: Schematics of network architectures.

# NDENet

Description:

https://www.jiqizhixin.com/articles/2019-05-17-7

-----


Table 1: In this table, we list a few popular deep networks, their associated ODEs and the numerical schemes that are connected to the architecture of the networks.

# NDENet

Description:

https://www.jiqizhixin.com/articles/2019-05-17-7
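
The key correspondence behind this table, written out in standard notation (a summary, not a quotation from the paper): a residual block update has exactly the form of one forward Euler step with step size 1.

\[
x_{l+1} = x_l + F(x_l)
\quad\Longleftrightarrow\quad
\frac{dx}{dt} = F(x), \qquad
x_{t+\Delta t} = x_t + \Delta t \, F(x_t), \ \Delta t = 1 .
\]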

-----

Follow-up research in areas that extend this work.

-----


# Transformer.

Description:

The Transformer also uses identity mappings (its residual connections).
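
A minimal PyTorch sketch of that identity mapping (layer sizes are illustrative): each Transformer sub-layer, here self-attention, is wrapped in a residual connection, output = LayerNorm(x + Sublayer(x)), mirroring the Add & Norm step of the original paper.

import torch
import torch.nn as nn

# A Transformer sub-layer wrapped in a residual (identity) connection.
d_model, n_heads = 512, 8
attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
norm = nn.LayerNorm(d_model)

x = torch.randn(2, 10, d_model)   # (batch, sequence length, features)
attn_out, _ = attn(x, x, x)       # the self-attention sub-layer
y = norm(x + attn_out)            # the identity (residual) connection
print(y.shape)                    # torch.Size([2, 10, 512])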

-----

◎ References

-----

# VGGNet. Cited 47,721 times. Stacks two conv3 layers to cover the receptive field of one conv5, repeatedly deepening the network to 16 and 19 layers.

Simonyan, Karen, and Andrew Zisserman. "Very deep convolutional networks for large-scale image recognition." arXiv preprint arXiv:1409.1556 (2014).

https://arxiv.org/pdf/1409.1556.pdf


# GoogLeNet

Szegedy, Christian, et al. "Going deeper with convolutions." Proceedings of the IEEE conference on computer vision and pattern recognition. 2015.

https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Szegedy_Going_Deeper_With_2015_CVPR_paper.pdf


# ResNet v1. Cited 61,600 times. Adds identity mappings inspired by the LSTM, allowing the network to reach a hundred layers.

He, Kaiming, et al. "Deep residual learning for image recognition." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.

https://openaccess.thecvf.com/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf


# ResNet v2. Cited 4,560 times. Shifts the focus from the residual block to a pure identity mapping, allowing the network to reach a thousand layers.

He, Kaiming, et al. "Identity mappings in deep residual networks." European conference on computer vision. Springer, Cham, 2016.

https://arxiv.org/pdf/1603.05027.pdf


# NDENet

Lu, Yiping, et al. "Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations." International Conference on Machine Learning. PMLR, 2018.

https://arxiv.org/pdf/1710.10121.pdf

http://proceedings.mlr.press/v80/lu18d/lu18d.pdf


# Transformer

Vaswani, Ashish, et al. "Attention is all you need." Advances in Neural Information Processing Systems. 2017.

https://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf

-----
