Sunday, May 16, 2021

ResNet(一):Paper Translation

2021/03/26

-----

Due to time constraints, later papers will not be translated.

-----



https://pixabay.com/zh/photos/italy-rome-coliseum-colosseum-2478808/

-----

Deep Residual Learning for Image Recognition

深度殘差學習以進行圖像識別

Abstract

Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [40], but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.

較深的神經網路較難訓練。我們提出了一個殘差學習框架,以簡化比以前使用的網路深得多的網路的訓練。我們明確地將層重新表述為參考層輸入來學習殘差函數,而不是學習未參考的函數。我們提供了全面的經驗證據,表明這些殘差網路更易於優化,並且可以通過大幅增加的深度來獲得準確度。在 ImageNet 資料集上,我們評估了深度最高達 152 層的殘差網路,比 VGG 網路 [40] 深 8 倍,但複雜度仍然較低。這些殘差網路的集成在 ImageNet 測試集上達到 3.57% 的錯誤率。該結果在 ILSVRC 2015 分類任務中獲得第一名。我們還展示了在 CIFAR-10 上對 100 層和 1000 層網路的分析。

The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

表徵的深度對於許多視覺識別任務至關重要。僅憑藉我們極深的表徵,我們就在 COCO 物件偵測資料集上獲得了 28% 的相對改進。深度殘差網路是我們提交 ILSVRC 和 COCO 2015 競賽的基礎,在這些競賽中,我們還在 ImageNet 偵測、ImageNet 定位、COCO 偵測和 COCO 分割等任務上獲得了第一名。

-----

1. Introduction

1. 介紹

Deep convolutional neural networks [22, 21] have led to a series of breakthroughs for image classification [21, 49, 39]. Deep networks naturally integrate low/mid/high-level features [49] and classifiers in an end-to-end multilayer fashion, and the “levels” of features can be enriched by the number of stacked layers (depth). Recent evidence [40, 43] reveals that network depth is of crucial importance, and the leading results [40, 43, 12, 16] on the challenging ImageNet dataset [35] all exploit “very deep” [40] models, with a depth of sixteen [40] to thirty [16]. Many other nontrivial visual recognition tasks [7, 11, 6, 32, 27] have also greatly benefited from very deep models.

深度卷積神經網路 [22,21] 帶動了圖像分類的一系列突破 [21,49,39]。深度網路自然地以端到端的多層方式整合了低/中/高階特徵 [49] 和分類器,而特徵的“層級”可以通過堆疊的層數(深度)來豐富。最近的證據 [40,43] 顯示網路深度至關重要,在具有挑戰性的 ImageNet 資料集 [35] 上的領先結果 [40,43,12,16] 都利用了“非常深”的模型 [40],深度從十六層 [40] 到三十層 [16]。許多其他非平凡的視覺識別任務 [7、11、6、32、27] 也從非常深的模型中受益匪淺。

Driven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers?  An obstacle to answering this question was the notorious problem of vanishing/exploding gradients [14, 1, 8], which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization [23, 8, 36, 12] and intermediate normalization layers [16], which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with backpropagation [22].

在深度的重要性的驅動下,出現了一個問題:學習更好的網路是否像堆疊更多的層一樣容易?回答這個問題的一個障礙是惡名昭彰的梯度消失 / 爆炸問題 [14,1,8],它從一開始就阻礙收斂。但是,此問題已通過歸一化初始化 [23、8、36、12] 和中間歸一化層 [16] 得到了很大程度的解決,這使具有數十層的網路能夠在帶有反向傳播 [22] 的隨機梯度下降(SGD)下開始收斂。
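As a rough illustration of these two remedies (my own toy sketch, not from the paper), the snippet below stacks 30 fully connected ReLU layers with a He-style normalized initialization and a crude stand-in for an intermediate normalization layer; the depth, width, and batch size are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def he_init(fan_in, fan_out):
    # Normalized initialization scaled for ReLU units:
    # keeps the activation variance roughly constant across layers.
    return rng.standard_normal((fan_out, fan_in)) * np.sqrt(2.0 / fan_in)

def normalize(h, eps=1e-5):
    # A crude stand-in for an intermediate normalization layer:
    # zero mean, unit variance over the batch dimension.
    return (h - h.mean(axis=0)) / (h.std(axis=0) + eps)

depth, width, batch = 30, 64, 32            # arbitrary illustrative sizes
weights = [he_init(width, width) for _ in range(depth)]

h = rng.standard_normal((batch, width))
for W in weights:
    h = np.maximum(normalize(h @ W.T), 0.0)  # linear -> normalize -> ReLU

print(h.std())  # activations neither vanish nor explode across the 30 layers
```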

When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in [10, 41] and thoroughly verified by our experiments. Fig. 1 shows a typical example.

當更深層的網路能夠開始收斂時,退化問題就暴露出來了:隨著網路深度的增加,準確度達到飽和(這可能不足為奇),然後迅速退化。出乎意料的是,這種退化不是由過擬合引起的;如 [10,41] 中所報告並經我們的實驗徹底驗證的那樣,將更多的層添加到適當深度的模型中會導致更高的訓練誤差。圖1 顯示了一個典型範例。

The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that are comparably good or better than the constructed solution (or unable to do so in feasible time).

訓練準確度的退化表明並非所有系統都同樣容易優化。讓我們考慮一個較淺的架構,以及在其上添加了更多層的較深版本。對於較深的模型,存在一個通過構造得到的解:添加的層是恆等映射,其他層則從已學習的較淺模型中複製。這個構造解的存在表明,較深的模型與其較淺的版本相比,不應產生更高的訓練誤差。但是實驗表明,我們現有的求解器無法找到與構造解一樣好或更好的解(或無法在可行的時間內找到)。
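The "solution by construction" argument can be made concrete with a toy sketch (my own illustration, with arbitrary sizes): copying the learned shallow layers and appending identity mappings yields a deeper model that computes exactly the same function, and hence can have no higher training error.

```python
import numpy as np

def forward(x, layers):
    # A plain stack of ReLU layers.
    for W in layers:
        x = np.maximum(W @ x, 0.0)
    return x

rng = np.random.default_rng(0)
d = 16
shallow = [rng.standard_normal((d, d)) * 0.1 for _ in range(5)]

# Deeper model "by construction": the learned shallow layers plus identity layers.
deeper = shallow + [np.eye(d) for _ in range(5)]

x = rng.standard_normal(d)
assert np.allclose(forward(x, shallow), forward(x, deeper))
```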

In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as H(x), we let the stacked nonlinear layers fit another mapping of F(x) := H(x)−x. The original mapping is recast into F(x)+x. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.

在本論文中,我們通過引入深度殘差學習框架來解決退化問題。我們不是希望每幾個堆疊層直接擬合所需的底層映射,而是明確地讓這些層擬合殘差映射。形式上,將所需的底層映射表示為 H(x),我們讓堆疊的非線性層擬合另一個映射 F(x) := H(x) − x。原始映射因此被改寫為 F(x) + x。我們假設優化殘差映射比優化原始的未參考映射要容易。極端情況下,如果恆等映射是最優的,那麼把殘差推向零,比用一疊非線性層去擬合恆等映射要容易。
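For reference, the reformulation above can be written out compactly in display form (restating the equations in the text; nothing here goes beyond it):

```latex
\[
F(x) := H(x) - x
\quad\Longrightarrow\quad
H(x) = F(x) + x ,
\qquad
\text{and if the optimal } H \text{ is the identity, it suffices to drive } F(x) \to 0 .
\]
```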

The formulation of F(x)+x can be realized by feedforward neural networks with “shortcut connections” (Fig. 2). Shortcut connections [2, 33, 48] are those skipping one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameter nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries (e.g., Caffe [19]) without modifying the solvers.

F(x) + x 的公式可通過具有“快捷連接”的前饋神經網路來實現(圖2)。快捷連接 [2、33、48] 是跳過一層或多層的連接。在我們的例子中,快捷連接僅執行恆等映射,並將它們的輸出添加到堆疊層的輸出中(圖2)。恆等快捷連接既不增加額外的參數,也不增加計算複雜度。整個網路仍然可以通過帶反向傳播的 SGD 進行端到端訓練,並且可以使用通用函式庫(例如 Caffe [19])輕鬆實現,而無需修改求解器。
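A minimal PyTorch-style sketch of such a block is given below (the paper itself used Caffe; this is an assumed, illustrative re-implementation, not the authors' code). The identity shortcut is a plain tensor addition, so it adds no parameters, and the module trains end-to-end with SGD and backpropagation like any other layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicResidualBlock(nn.Module):
    """Two 3x3 conv layers with an identity shortcut (illustrative sketch)."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + x        # identity shortcut: parameter-free element-wise addition
        return F.relu(out)   # second nonlinearity applied after the addition

# Toy usage: the block preserves the shape, so the shortcut needs no projection here.
block = BasicResidualBlock(channels=64)
y = block(torch.randn(1, 64, 32, 32))
```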

We present comprehensive experiments on ImageNet [35] to show the degradation problem and evaluate our method. We show that: 1) Our extremely deep residual nets are easy to optimize, but the counterpart “plain” nets (that simply stack layers) exhibit higher training error when the depth increases; 2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks.

我們在 ImageNet [35] 上進行了全面的實驗,以展示退化問題並評估我們的方法。我們證明:1)我們極深的殘差網路很容易優化,但是對應的“普通”網路(簡單地堆疊層)在深度增加時顯示出更高的訓練誤差;2)我們的深度殘差網路可以輕易地從大幅增加的深度中獲得準確度提升,產生比以前的網路明顯更好的結果。

Similar phenomena are also shown on the CIFAR-10 set [20], suggesting that the optimization difficulties and the effects of our method are not just akin to a particular dataset. We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers.

在 CIFAR-10 上也顯示了類似現象 [20],這表明優化困難和我們方法的效果不僅限定於特定資料集。我們在此資料集上展示了經過成功訓練的 100 層以上的模型,並探索了 1000 層以上的模型。

On the ImageNet classification dataset [35], we obtain excellent results by extremely deep residual nets. Our 152-layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets [40]. Our ensemble has 3.57% top-5 error on the ImageNet test set, and won the 1st place in the ILSVRC 2015 classification competition. The extremely deep representations also have excellent generalization performance on other recognition tasks, and lead us to further win the 1st places on: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation in ILSVRC & COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems.

在 ImageNet 分類資料集 [35] 上,我們通過極深的殘差網路獲得了出色的結果。我們的 152 層殘差網路是迄今在 ImageNet 上提出的最深的網路,同時其複雜度仍低於 VGG 網路 [40]。我們的集成在 ImageNet 測試集上的 top-5 錯誤率為 3.57%,並在 ILSVRC 2015 分類比賽中獲得第一名。極深的表徵在其他識別任務上也具有出色的泛化性能,使我們在 ILSVRC 和 COCO 2015 競賽中進一步贏得了多個第一名:ImageNet 偵測、ImageNet 定位、COCO 偵測和 COCO 分割。這有力的證據表明,殘差學習原理是通用的,我們期望它也適用於其他視覺和非視覺問題。

-----

2. Related Work

2.相關工作

Residual Representations. In image recognition, VLAD [18] is a representation that encodes by the residual vectors with respect to a dictionary, and Fisher Vector [30] can be formulated as a probabilistic version [18] of VLAD. Both of them are powerful shallow representations for image retrieval and classification [4, 47]. For vector quantization, encoding residual vectors [17] is shown to be more effective than encoding original vectors.

殘差表示。在圖像識別中,VLAD [18] 是一種通過相對於字典的殘差向量進行編碼的表示,而 Fisher Vector [30] 可被表述為 VLAD 的機率版本 [18]。兩者都是用於圖像檢索和分類的強大的淺層表示 [4,47]。對於向量量化,編碼殘差向量 [17] 已被證明比編碼原始向量更有效。
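As a generic illustration of the idea (a toy sketch with a made-up random codebook, not the actual VLAD or Fisher Vector formulation from [17, 18]), encoding by residual vectors with respect to a dictionary looks roughly like this:

```python
import numpy as np

def encode_residual(x, codebook):
    # Quantize x to its nearest codeword, then keep only the residual:
    # the residual is typically smaller and easier to encode than x itself.
    idx = int(np.argmin(np.linalg.norm(codebook - x, axis=1)))
    return idx, x - codebook[idx]

def decode_residual(idx, residual, codebook):
    return codebook[idx] + residual

rng = np.random.default_rng(0)
codebook = rng.standard_normal((256, 8))   # hypothetical 256-entry dictionary
x = rng.standard_normal(8)
idx, r = encode_residual(x, codebook)
assert np.allclose(decode_residual(idx, r, codebook), x)
```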

In low-level vision and computer graphics, for solving Partial Differential Equations (PDEs), the widely used Multigrid method [3] reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning [44, 45], which relies on variables that represent residual vectors between two scales. It has been shown [3, 44, 45] that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization.

在低階視覺和計算機圖學中,為了求解偏微分方程(PDE),廣泛使用的 Multigrid 方法 [3] 將系統重新表述為多個尺度上的子問題,其中每個子問題負責較粗尺度與較細尺度之間的殘差解。Multigrid 的一種替代方法是分層基底預處理(hierarchical basis preconditioning)[44,45],它依賴於表示兩個尺度之間殘差向量的變數。已經證明 [3,44,45],這些求解器的收斂速度比不知道解的殘差性質的標準求解器快得多。這些方法表明,良好的重新表述或預處理可以簡化優化過程。

Shortcut Connections. Practices and theories that lead to shortcut connections [2, 33, 48] have been studied for a long time. An early practice of training multi-layer perceptrons (MLPs) is to add a linear layer connected from the network input to the output [33, 48]. In [43, 24], a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. The papers of [38, 37, 31, 46] propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. In [43], an “inception” layer is composed of a shortcut branch and a few deeper branches.

快捷連接。導致快捷連接 [2、33、48] 的實踐和理論已經被研究了很長時間。訓練多層感知器(MLP)的一種早期實踐是添加一個從網路輸入連接到輸出的線性層 [33,48]。在 [43,24] 中,一些中間層直接連接到輔助分類器,以解決梯度消失 / 爆炸問題。[38,37,31,46] 等論文提出了通過快捷連接實現對層響應、梯度和傳播誤差進行居中化的方法。在 [43] 中,“Inception”層由一個快捷分支和一些更深的分支組成。

Concurrent with our work, “highway networks” [41, 42] present shortcut connections with gating functions [15]. These gates are data-dependent and have parameters, in contrast to our identity shortcuts that are parameter-free. When a gated shortcut is “closed” (approaching zero), the layers in highway networks represent non-residual functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, highway networks have not demonstrated accuracy gains with extremely increased depth (e.g., over 100 layers).

與我們的工作同時,“公路網路”(highway networks)[41、42] 提出了帶有閘道函數 [15] 的快捷連接。與我們不帶參數的恆等快捷連接相反,這些閘道取決於資料並帶有參數。當閘道控制的快捷連接“關閉”(趨近於零)時,公路網路中的層表示的是非殘差函數。相反,我們的公式總是學習殘差函數;我們的恆等快捷連接永遠不會關閉,所有資訊始終都會傳遞,另外還有殘差函數需要學習。此外,公路網路尚未展示出在深度極大增加(例如超過 100 層)時的準確度提升。
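The contrast can be sketched as follows (hypothetical single fully connected layers with made-up weights, not the actual highway or ResNet architectures): the highway layer gates the shortcut with a data-dependent, parameterized gate, whereas the residual layer always adds the identity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, Wh, Wt, bt):
    # Gated shortcut: the transform gate T is data-dependent and adds parameters (Wt, bt).
    H = np.tanh(Wh @ x)
    T = sigmoid(Wt @ x + bt)
    # When the carry term (1 - T) approaches zero, the layer reduces to a plain,
    # non-residual H(x) with no information passed through the shortcut.
    return T * H + (1.0 - T) * x

def residual_layer(x, W1, W2):
    # Identity shortcut: parameter-free and never closed; F(x) is always added to x.
    F = W2 @ np.maximum(W1 @ x, 0.0)
    return np.maximum(F + x, 0.0)
```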

-----

3. Deep Residual Learning

3. 深度殘差學習

3.1. Residual Learning

3.1. 殘差學習

Let us consider H(x) as an underlying mapping to be fit by a few stacked layers (not necessarily the entire net), with x denoting the inputs to the first of these layers. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions, then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, i.e., H(x) − x (assuming that the input and output are of the same dimensions). So rather than expect stacked layers to approximate H(x), we explicitly let these layers approximate a residual function F(x) := H(x) − x. The original function thus becomes F(x)+x. Although both forms should be able to asymptotically approximate the desired functions (as hypothesized), the ease of learning might be different.

讓我們將 H(x) 視為由一些堆疊層(不一定是整個網路)擬合的基礎映射,其中 x 表示這些層中第一層的輸入。如果假設多個非線性層可以漸近地逼近複雜函數,則等效於假設它們可以漸近地近似殘差函數,即 H(x) − x(假設輸入和輸出的維數相同)。因此,我們沒有讓堆疊的層近似為 H(x),而是明確地讓這些層近似為殘差函數 F(x) := H(x) − x。因此,原始函數變為 F(x) + x。儘管兩種形式都應該能夠漸近地逼近所需的函數(如假設的那樣),但學習的難易程度可能會有所不同。

This reformulation is motivated by the counterintuitive phenomena about the degradation problem (Fig. 1, left). As we discussed in the introduction, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that the solvers might have difficulties in approximating identity mappings by multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings.

這種重新表述是由關於退化問題的違反直覺的現象所啟發的(圖1,左)。正如我們在簡介中討論的那樣,如果可以將添加的層構造為恆等映射,則較深模型的訓練誤差應不大於其較淺版本的訓練誤差。退化問題表明,求解器可能難以用多個非線性層來逼近恆等映射。通過殘差學習的重新表述,如果恆等映射是最優的,則求解器可以簡單地將多個非線性層的權重驅向零,以逼近恆等映射。
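To make the last point concrete, using the two-layer form F = W2σ(W1x) defined in Sec. 3.2 below (biases omitted), driving the weights to zero recovers the identity mapping:

```latex
\[
F(x) = W_2\,\sigma(W_1 x),
\qquad
W_1, W_2 \to 0
\;\Longrightarrow\;
F(x) \to 0
\;\Longrightarrow\;
y = F(x) + x \to x .
\]
```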

In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping, than to learn the function as a new one. We show by experiments (Fig. 7) that the learned residual functions in general have small responses, suggesting that identity mappings provide reasonable preconditioning.

在實際情況下,恆等映射不太可能是最優的,但是我們的重新表述可能有助於對問題進行預處理。如果最優函數更接近恆等映射而不是零映射,那麼求解器參考恆等映射來尋找擾動,應該比把該函數當作全新的函數來學習更容易。我們通過實驗(圖7)表明,學習到的殘差函數通常具有較小的響應,這表明恆等映射提供了合理的預處理。

3.2. Identity Mapping by Shortcuts

3.2. 通過快捷方式進行恆等映射

We adopt residual learning to every few stacked layers. A building block is shown in Fig. 2. Formally, in this paper we consider a building block defined as:

y = F(x, {Wi}) + x.    (1)

我們對每幾個堆疊的層採用殘差學習。 構建塊如圖2 所示。在形式上,在本文中,我們考慮一個構建塊,定義為:

Here x and y are the input and output vectors of the layers considered. The function F(x, {Wi}) represents the residual mapping to be learned. For the example in Fig. 2 that has two layers, F = W2σ(W1x) in which σ denotes ReLU [29] and the biases are omitted for simplifying notations. The operation F + x is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., σ(y), see Fig. 2).
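As a sanity check, here is a direct NumPy sketch of this two-layer block (my own toy transcription; the dimension d and the weight values are arbitrary, and biases are omitted as in the notation above):

```python
import numpy as np

def relu(z):                       # sigma in the text
    return np.maximum(z, 0.0)

def building_block(x, W1, W2):
    F = W2 @ relu(W1 @ x)          # F(x, {Wi}) = W2 * sigma(W1 * x), biases omitted
    y = F + x                      # shortcut connection and element-wise addition
    return relu(y)                 # second nonlinearity after the addition, sigma(y)

# Toy usage: x and y must have the same dimensions for the identity shortcut.
rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal(d)
W1, W2 = rng.standard_normal((d, d)) * 0.1, rng.standard_normal((d, d)) * 0.1
y = building_block(x, W1, W2)
```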

-----

# ResNet v1

He, Kaiming, et al. "Deep residual learning for image recognition." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.

https://openaccess.thecvf.com/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf

-----
