AI From Scratch (38): Highlight
2017/09/05
1. The Basics
1.1 CNN
1.2 RNN
1.3 Overview
1.4 ML
2. Applications of CNN
2.1 R-CNN
2.2 GAN
3. Reinforcement Learning
3.1 Game
3.2 Go
4. Applications of RNN
4.1 NTM
5. Regularization and Optimization
5.1 Regularization
5.2 Optimization
6. Autoencoders and Restricted Boltzmann Machines
6.1 AE
6.2 RBM
7. Miscellaneous Applications
7.1 Robotics
7.2 Fintech
-----
1.1 CNN
◎ LeNet. The first CNN "successfully" applied to the MNIST handwritten digit dataset.
LeCun, Yann, et al. "Gradient-based learning applied to document recognition." Proceedings of the IEEE 86.11 (1998): 2278-2324.
◎ A paper cited by both AlexNet and VGGNet. Perhaps the first CNN trained on GPUs? Going wider failed here; VGGNet later succeeded by going deeper.
Ciresan, Dan C., et al. "Flexible, high performance convolutional neural networks for image classification." IJCAI Proceedings-International Joint Conference on Artificial Intelligence. Vol. 22. No. 1. 2011.
◎ AlexNet
Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "Imagenet classification with deep convolutional neural networks." Advances in neural information processing systems. 2012.
◎ ZFNet. Fine-tunes AlexNet's hyperparameters. Feature visualization.
Zeiler, Matthew D., and Rob Fergus. "Visualizing and understanding convolutional networks." European conference on computer vision. Springer, Cham, 2014.
◎ VGGNet. Decomposes a conv5 into two stacked conv3 layers, then keeps going deeper (see the sketch below).
Simonyan, Karen, and Andrew Zisserman. "Very deep convolutional networks for large-scale image recognition." arXiv preprint arXiv:1409.1556 (2014).
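A minimal NumPy sketch (mine, not the paper's code) of why the decomposition works: two stacked 3 x 3 convolutions see the same 5 x 5 receptive field as a single 5 x 5 convolution, with 18 weights instead of 25 and room for an extra nonlinearity in between. conv2d_valid is a toy helper written for this illustration.

import numpy as np

def conv2d_valid(x, k):
    """Toy 'valid' 2-D convolution (no padding, stride 1)."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

x = np.random.randn(5, 5)                 # one 5x5 input patch
k5 = np.random.randn(5, 5)                # a single conv5 kernel: 25 weights
k3a, k3b = np.random.randn(3, 3), np.random.randn(3, 3)  # 2 x 9 = 18 weights

print(conv2d_valid(x, k5).shape)                      # (1, 1)
print(conv2d_valid(conv2d_valid(x, k3a), k3b).shape)  # (1, 1): same receptive field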
◎ NIN. An inspiration for GoogLeNet. The 1 x 1 convolution (sketched below).
Lin, Min, Qiang Chen, and Shuicheng Yan. "Network in network." arXiv preprint arXiv:1312.4400 (2013).
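A hedged illustration (my toy helper, not NIN's code) of what a 1 x 1 convolution does: a per-position linear map across channels, here shrinking 64 channels to 16.

import numpy as np

def conv1x1(x, w):
    """1x1 convolution: mix channels at each spatial position independently.
    x: (C_in, H, W), w: (C_out, C_in) -> (C_out, H, W)."""
    return np.einsum('oc,chw->ohw', w, x)

x = np.random.randn(64, 7, 7)   # 64-channel feature map
w = np.random.randn(16, 64)     # compress to 16 channels
print(conv1x1(x, w).shape)      # (16, 7, 7)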
◎ The basis of GoogLeNet's architecture and the inspiration for Inception.
Arora, Sanjeev, et al. "Provable bounds for learning some deep representations." International Conference on Machine Learning. 2014.
◎ GoogLeNet.
Szegedy, Christian, et al. "Going deeper with convolutions." Proceedings of the IEEE conference on computer vision and pattern recognition. 2015.
◎ GoogLeNet series. Inception v2. Batch normalization (BN).
Ioffe, Sergey, and Christian Szegedy. "Batch normalization: Accelerating deep network training by reducing internal covariate shift." International Conference on Machine Learning. 2015.
◎ GoogLeNet series. Inception v3.
Szegedy, Christian, et al. "Rethinking the inception architecture for computer vision." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.
◎ GoogLeNet series. Inception v4. Combines Inception with ResNet.
Szegedy, Christian, et al. "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning." AAAI. 2017.
◎ When a network gets too deep, gradients stop propagating; hence highway networks (a layer is sketched below).
Srivastava, Rupesh Kumar, Klaus Greff, and Jürgen Schmidhuber. "Highway networks." arXiv preprint arXiv:1505.00387 (2015).
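A minimal NumPy sketch of a highway layer as defined in the paper: y = H(x) * T(x) + x * (1 - T(x)), so a learned gate decides how much to transform and how much to carry through unchanged. The tanh/sigmoid choices follow the paper; the helper names are mine.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, Wh, bh, Wt, bt):
    """y = H(x) * T(x) + x * (1 - T(x))."""
    h = np.tanh(x @ Wh + bh)      # candidate transform H(x)
    t = sigmoid(x @ Wt + bt)      # transform gate T(x); initialize bt negative
                                  # so the layer starts near the identity
    return h * t + x * (1.0 - t)  # carry gate C = 1 - T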
◎ ResNet. Inherits VGGNet's architecture (a residual block is sketched below).
He, Kaiming, et al. "Deep residual learning for image recognition." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
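For contrast, a residual block drops the gate entirely: y = x + F(x). A minimal dense sketch (the paper's F is convolutional; this simplification is mine):

import numpy as np

def residual_block(x, W1, W2):
    """y = x + F(x); F is two weight layers with a ReLU in between."""
    f = np.maximum(0.0, x @ W1) @ W2  # F(x)
    return x + f                      # identity shortcut: the gradient flows
                                      # through '+ x' untouched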
◎ The theoretical basis of ResNet.
He, Kaiming, et al. "Identity mappings in deep residual networks." European Conference on Computer Vision. Springer International Publishing, 2016.
◎ DenseNet.
Huang, Gao, et al. "Densely connected convolutional networks." arXiv preprint arXiv:1608.06993 (2016).
-----
1.2 RNN
◎ LSTM (one cell step is sketched below).
Hochreiter, Sepp, and Jürgen Schmidhuber. "Long short-term memory." Neural computation 9.8 (1997): 1735-1780.
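A single-step NumPy sketch of the now-standard LSTM cell. Note the forget gate was added after this 1997 paper (Gers et al. 2000), and the stacked-gate layout is a common implementation convention, not the paper's notation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One step. x: (D,), h, c: (H,), W: (4H, D), U: (4H, H), b: (4H,)."""
    z = W @ x + U @ h + b
    H = h.shape[0]
    i = sigmoid(z[0*H:1*H])       # input gate
    f = sigmoid(z[1*H:2*H])       # forget gate
    o = sigmoid(z[2*H:3*H])       # output gate
    g = np.tanh(z[3*H:4*H])       # candidate update
    c = f * c + i * g             # cell state: the 'constant error carousel'
    h = o * np.tanh(c)
    return h, c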
-----
1.3 Overview
◎ Once you understand CNN and RNN, this one is worth a look. Even if you don't yet, you can still read it to get a feel for the "power" of deep learning in applications.
LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. "Deep learning." Nature 521.7553 (2015): 436-444.
◎ This book is "excellent" and also "a bit hard." Consult the relevant chapters as you read the papers.
Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.
◎ A rather "historical" review. If you don't yet understand CNN, RNN, and so on, it will be hard to follow.
Schmidhuber, Jürgen. "Deep learning in neural networks: An overview." Neural networks 61 (2015): 85-117.
-----
1.4 ML
◎ Top 10
Wu, Xindong, et al. "Top 10 algorithms in data mining." Knowledge and information systems 14.1 (2008): 1-37.
◎ Currently hot: XGBoost.
Chen, Tianqi, and Carlos Guestrin. "XGBoost: A scalable tree boosting system." Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining. ACM, 2016.
-----
2.1 R-CNN
◎ Pilot of R-CNN. (All the detectors in this list match and suppress boxes by IoU; a sketch follows the list.)
Szegedy, Christian, Alexander Toshev, and Dumitru Erhan. "Deep neural networks for object detection." Advances in neural information processing systems. 2013.
◎ R-CNN
Girshick, Ross, et al. "Rich feature hierarchies for accurate object detection and semantic segmentation." Proceedings of the IEEE conference on computer vision and pattern recognition. 2014.
◎ Fast R-CNN.
Girshick, Ross. "Fast R-CNN." Proceedings of the IEEE international conference on computer vision. 2015.
◎ Faster R-CNN.
Ren, Shaoqing, et al. "Faster R-CNN: Towards real-time object detection with region proposal networks." Advances in neural information processing systems. 2015.
◎ YOLO.
Redmon, Joseph, et al. "You only look once: Unified, real-time object detection." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.
◎ SSD.
Liu, Wei, et al. "SSD: Single shot multibox detector." European conference on computer vision. Springer, Cham, 2016.
◎ Mask R-CNN.
He, Kaiming, et al. "Mask R-CNN." arXiv preprint arXiv:1703.06870 (2017).
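The detectors above all score box overlap with intersection-over-union; a tiny sketch of that shared measure (my toy helper, not any one paper's code):

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7, about 0.143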
-----
2.2 GAN
◎ GAN. (The GAN and WGAN objectives are written out after this list.)
Goodfellow, Ian, et al. "Generative adversarial nets." Advances in neural information processing systems. 2014.
◎ Wasserstein GAN - 1.
Arjovsky, Martin, and Léon Bottou. "Towards principled methods for training generative adversarial networks." arXiv preprint arXiv:1701.04862 (2017).
◎ Wasserstein GAN - 2.
Arjovsky, Martin, Soumith Chintala, and Léon Bottou. "Wasserstein GAN." arXiv preprint arXiv:1701.07875 (2017).
◎ Wasserstein GAN - 3.
Gulrajani, Ishaan, et al. "Improved training of Wasserstein GANs." arXiv preprint arXiv:1704.00028 (2017).
◎ GAN overview - 1.
Creswell, Antonia, et al. "Generative Adversarial Networks: An Overview." arXiv preprint arXiv:1710.07035 (2017).
◎ GAN overview - 2.
Wang, Kunfeng, et al. "Generative adversarial networks: introduction and outlook." IEEE/CAA Journal of Automatica Sinica 4.4 (2017): 588-598.
◎ GAN overview - 3.
How Generative Adversarial Nets and its variants Work: An Overview of GAN
https://arxiv.org/pdf/1711.05914.pdf
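In standard notation, the objectives behind the GAN and WGAN entries above (LaTeX, as usually stated in these papers):

% Original GAN (Goodfellow et al. 2014): a minimax game.
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)]
             + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

% WGAN (Arjovsky et al. 2017): maximize over 1-Lipschitz critics f.
\min_G \max_{\|f\|_L \le 1} \; \mathbb{E}_{x \sim p_{\mathrm{data}}}[f(x)]
                           - \mathbb{E}_{z \sim p_z}[f(G(z))]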
-----
3. Reinforcement Learning
◎ The classic RL textbook.
Sutton, Richard S., and Andrew G. Barto. Reinforcement learning: An introduction. 2nd ed. (draft), 2016.
http://incompleteideas.net/sutton/book/bookdraft2016sep.pdf
-----
3.1 Game
◎ DQN (its Bellman targets are sketched after this list)
Mnih, Volodymyr, et al. "Human-level control through deep reinforcement learning." Nature 518.7540 (2015): 529-533.
◎ A3C
Mnih, Volodymyr, et al. "Asynchronous methods for deep reinforcement learning." International Conference on Machine Learning. 2016.
◎ UNREAL
Jaderberg, Max, et al. "Reinforcement learning with unsupervised auxiliary tasks." arXiv preprint arXiv:1611.05397 (2016).
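A minimal sketch of the Bellman target DQN regresses toward; the batch layout and parameter names are mine.

import numpy as np

def dqn_targets(r, q_next, done, gamma=0.99):
    """y = r + gamma * max_a' Q_target(s', a'); no bootstrap at episode end.
    r, done: (B,); q_next: (B, A), Q-values of the frozen target network."""
    return r + gamma * (1.0 - done) * q_next.max(axis=1)

print(dqn_targets(np.array([1.0]), np.array([[0.5, 2.0]]), np.array([0.0])))
# [2.98] = 1.0 + 0.99 * 2.0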
-----
3.2 Go
◎ 1994
Schraudolph, Nicol N., Peter Dayan, and Terrence J. Sejnowski. "Temporal difference learning of position evaluation in the game of Go." Advances in Neural Information Processing Systems. 1994.
◎ 1996
Enzenberger, Markus. "The integration of a priori knowledge into a Go playing neural network." URL: http://www. markus-enzenberger. de/neurogo. html (1996).
◎ 2002
Van Der Werf, Erik, et al. "Local move prediction in Go." International Conference on Computers and Games. Springer, Berlin, Heidelberg, 2002.
◎ 2004
Enzenberger, Markus. "Evaluation in Go by a neural network using soft segmentation." Advances in Computer Games. Springer US, 2004. 97-108.
◎ 2008
Sutskever, Ilya, and Vinod Nair. "Mimicking go experts with convolutional neural networks." Artificial Neural Networks-ICANN 2008 (2008): 101-110.
◎ 2014
Maddison, Chris J., et al. "Move evaluation in go using deep convolutional neural networks." arXiv preprint arXiv:1412.6564 (2014).
◎ 2015
Clark, Christopher, and Amos Storkey. "Training deep convolutional neural networks to play go." International Conference on Machine Learning. 2015.
◎ 2016 AlphaGo.
Silver, David, et al. "Mastering the game of Go with deep neural networks and tree search." Nature 529.7587 (2016): 484-489.
◎ 2017 AlphaGo Zero.
Silver, David, et al. "Mastering the game of Go without human knowledge." Nature 550.7676 (2017): 354-359.
-----
4.1 NTM
◎ NTM (its content-based addressing is sketched after this list)
Graves, Alex, Greg Wayne, and Ivo Danihelka. "Neural turing machines." arXiv preprint arXiv:1410.5401 (2014).
◎ DNC
Graves, Alex, et al. "Hybrid computing using a neural network with dynamic external memory." Nature 538.7626 (2016): 471-476.
◎ Distill overview of attention and memory-augmented RNNs.
Olah, Chris, and Shan Carter. "Attention and augmented recurrent neural networks." Distill 1.9 (2016): e1.
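A sketch of the NTM's content-based addressing: cosine similarity to a key, sharpened by beta and normalized with a softmax. Variable names are mine.

import numpy as np

def content_addressing(M, k, beta):
    """Attention weights over N memory slots.
    M: (N, width), k: (width,), beta > 0 sharpens the focus."""
    sim = M @ k / (np.linalg.norm(M, axis=1) * np.linalg.norm(k) + 1e-8)
    e = np.exp(beta * sim - np.max(beta * sim))  # numerically stable softmax
    return e / e.sum()

M = np.eye(4)                                    # 4 one-hot memory slots
print(content_addressing(M, M[2], beta=10.0).round(3))  # weight lands on slot 2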
-----
5.1 Regularization
◎ Weight decay. (All three techniques in this list are sketched below.)
Krogh, Anders, and John A. Hertz. "A simple weight decay can improve generalization." Advances in neural information processing systems. 1992.
◎ Dropout.
Srivastava, Nitish, et al. "Dropout: a simple way to prevent neural networks from overfitting." Journal of machine learning research 15.1 (2014): 1929-1958.
◎ Batch normalization.
Ioffe, Sergey, and Christian Szegedy. "Batch normalization: Accelerating deep network training by reducing internal covariate shift." International Conference on Machine Learning. 2015.
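Minimal sketches of all three, under my own naming; inverted dropout and training-mode BN are the common formulations rather than code from these papers.

import numpy as np

rng = np.random.default_rng(0)

def sgd_weight_decay(w, grad, lr=0.1, lam=1e-4):
    """Weight decay: penalize (lam/2)*||w||^2, i.e. shrink w every step."""
    return w - lr * (grad + lam * w)

def dropout(a, p=0.5, train=True):
    """Inverted dropout: zero units w.p. p and rescale, so test time is a no-op."""
    if not train:
        return a
    return a * (rng.random(a.shape) >= p) / (1.0 - p)

def batch_norm(x, gamma, beta, eps=1e-5):
    """Training-mode BN: normalize each feature over the batch, then rescale.
    x: (B, D); gamma, beta: (D,) learned."""
    mu, var = x.mean(axis=0), x.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta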
-----
5.2 Optimization
◎ Overview of optimization (one update step for several of the rules below is sketched after this list)
Ruder, Sebastian. "An overview of gradient descent optimization algorithms." arXiv preprint arXiv:1609.04747 (2016).
◎ 1 SGD
Bottou, Léon. "Stochastic gradient descent tricks." Neural networks: Tricks of the trade. Springer Berlin Heidelberg, 2012. 421-436.
◎ 2 Momentum
Polyak, Boris T. "Some methods of speeding up the convergence of iteration methods." USSR Computational Mathematics and Mathematical Physics 4.5 (1964): 1-17.
◎ 3 NAG
Sutskever, Ilya, et al. "On the importance of initialization and momentum in deep learning." International conference on machine learning. 2013.
◎ 4 AdaGrad
Duchi, John, Elad Hazan, and Yoram Singer. "Adaptive subgradient methods for online learning and stochastic optimization." Journal of Machine Learning Research 12.Jul (2011): 2121-2159.
◎ 5 AdaDelta
Zeiler, Matthew D. "ADADELTA: an adaptive learning rate method." arXiv preprint arXiv:1212.5701 (2012).
◎ 6 RMSProp
Hinton, G., N. Srivastava, and K. Swersky. "RMSProp: Divide the gradient by a running average of its recent magnitude." Neural networks for machine learning, Coursera lecture 6e (2012).
◎ 7 Adam
Kingma, Diederik, and Jimmy Ba. "Adam: A method for stochastic optimization." arXiv preprint arXiv:1412.6980 (2014).
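One update step for a few of the rules above, as functional sketches with my own signatures (optimizer state is passed in and returned):

import numpy as np

def sgd(w, g, lr=0.01):
    return w - lr * g

def momentum(w, g, v, lr=0.01, mu=0.9):
    v = mu * v - lr * g               # velocity accumulates past gradients
    return w + v, v

def rmsprop(w, g, s, lr=0.001, rho=0.9, eps=1e-8):
    s = rho * s + (1 - rho) * g * g   # running mean of squared gradients
    return w - lr * g / (np.sqrt(s) + eps), s

def adam(w, g, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g         # first-moment estimate
    v = b2 * v + (1 - b2) * g * g     # second-moment estimate
    m_hat = m / (1 - b1 ** t)         # bias correction (t starts at 1)
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v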
-----
6.1 AE
◎ Autoencoders and minimum description length.
Hinton, Geoffrey E., and Richard S. Zemel. "Autoencoders, minimum description length and Helmholtz free energy." Advances in neural information processing systems. 1994.
◎ Greedy layer-wise pretraining.
Bengio, Yoshua, et al. "Greedy layer-wise training of deep networks." Advances in neural information processing systems. 2007.
◎ Sparse representations with an energy-based model.
Poultney, Christopher, Sumit Chopra, and Yann L. Cun. "Efficient learning of sparse representations with an energy-based model." Advances in neural information processing systems. 2007.
◎ Denoising autoencoders (a forward-pass sketch follows this list).
Vincent, Pascal, et al. "Extracting and composing robust features with denoising autoencoders." Proceedings of the 25th international conference on Machine learning. ACM, 2008.
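A minimal denoising-autoencoder forward pass. Masking noise and tied weights follow the paper; the squared-error loss is a simplification (the paper also uses cross-entropy), and the names are mine.

import numpy as np

rng = np.random.default_rng(0)

def corrupt(x, p=0.3):
    """Masking noise: zero each input dimension with probability p."""
    return x * (rng.random(x.shape) >= p)

def dae_forward(x_clean, W, b, b_prime):
    """Encode the corrupted input, decode, score against the CLEAN input."""
    h = np.tanh(corrupt(x_clean) @ W + b)   # encoder
    x_hat = h @ W.T + b_prime               # decoder with tied weights
    return x_hat, np.mean((x_hat - x_clean) ** 2)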
-----
6.2 RBM
◎ This article explains the physical meaning of the RBM very clearly: 浅析 Hinton 最近提出的 Capsule 计划 (Zhihu column).
https://zhuanlan.zhihu.com/p/29435406
◎ Deep belief nets. (A CD-1 sketch for training RBMs follows this list.)
Hinton, Geoffrey E., Simon Osindero, and Yee-Whye Teh. "A fast learning algorithm for deep belief nets." Neural computation 18.7 (2006): 1527-1554.
◎ Reducing dimensionality with neural networks.
Hinton, Geoffrey E., and Ruslan R. Salakhutdinov. "Reducing the dimensionality of data with neural networks." Science 313.5786 (2006): 504-507.
◎ Deep Boltzmann machines.
Salakhutdinov, Ruslan, and Geoffrey Hinton. "Deep Boltzmann machines." Artificial Intelligence and Statistics. 2009.
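A single-example CD-1 sketch for a binary RBM with energy E(v, h) = -b·v - c·h - v·W·h. The learning rule follows Hinton's contrastive divergence; array shapes and names are mine.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(v0, W, b, c, lr=0.1):
    """One contrastive-divergence step. v0: (V,), W: (V, H), b: (V,), c: (H,)."""
    ph0 = sigmoid(v0 @ W + c)                   # P(h = 1 | v0)
    h0 = (rng.random(ph0.shape) < ph0) * 1.0    # sample hidden layer
    pv1 = sigmoid(h0 @ W.T + b)                 # one-step reconstruction
    v1 = (rng.random(pv1.shape) < pv1) * 1.0
    ph1 = sigmoid(v1 @ W + c)
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))  # data term - model term
    b += lr * (v0 - v1)
    c += lr * (ph0 - ph1)
    return W, b, c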
-----
7.1 Robotics
◎ Robotics papers: a pick of the picks.
Levine, Sergey, et al. "End-to-end training of deep visuomotor policies." Journal of Machine Learning Research 17.39 (2016): 1-40.
-----
7.2 Fintech
◎ Machine Learning
Cavalcante, Rodolfo C., et al. "Computational intelligence and financial markets: A survey and future directions." Expert Systems with Applications 55 (2016): 194-211.
◎ ANN
Tkáč, Michal, and Robert Verner. "Artificial neural networks in business: Two decades of research." Applied Soft Computing 38 (2016): 788-804.
◎ Deep Learning
Chong, Eunsuk, Chulwoo
Han, and Frank C. Park. "Deep learning networks for stock market
analysis and prediction: Methodology, data representations, and case
studies." Expert Systems with Applications 83 (2017): 187-205.
◎ Neural networks for business computing.
Li, Yawen, et al. "On neural networks and learning systems for business computing." Neurocomputing (2017).