Thursday, December 05, 2019

RMSProp

2019/12/02

-----

# RMSProp

-----

References

Tieleman, Tijmen, and Geoffrey Hinton. "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude." COURSERA: Neural networks for machine learning 4.2 (2012): 26-31.
http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf 
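The lecture title above summarizes the method: divide the gradient by a running average of its recent magnitude. A minimal sketch of that update rule in plain Python (function name, hyperparameter values, and the toy objective are illustrative assumptions, not taken from the slides):

```python
import math

def rmsprop_update(w, grad, cache, lr=0.01, decay=0.9, eps=1e-8):
    """One RMSProp step (hypothetical helper, scalar case for clarity).

    Keeps an exponentially decaying average of squared gradients
    and scales the step by the inverse root mean square of it.
    """
    # Running average of squared gradient magnitudes
    cache = decay * cache + (1 - decay) * grad ** 2
    # Divide the gradient by the RMS of its recent magnitudes
    w = w - lr * grad / (math.sqrt(cache) + eps)
    return w, cache

# Usage: minimize f(w) = w^2, whose gradient is 2w
w, cache = 5.0, 0.0
for _ in range(2000):
    w, cache = rmsprop_update(w, 2.0 * w, cache)
```

Because the gradient is normalized by its own recent magnitude, the effective step size stays near `lr` regardless of how large or small the raw gradients are, which is what makes RMSProp robust across layers with very different gradient scales.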

-----

Overview of different Optimizers for neural networks
https://medium.com/datadriveninvestor/overview-of-different-optimizers-for-neural-networks-e0ed119440c3

An Overview on Optimization Algorithms in Deep Learning 2 - Taihong Xiao
https://prinsphield.github.io/posts/2016/02/overview_opt_alg_deep_learning2/

-----

[Optimization Algorithms] Understanding the RMSProp Optimization Algorithm in One Article - Zhihu
https://zhuanlan.zhihu.com/p/34230849
