https://medium.com/towards-data-science/deep-learning-for-object-detection-a-comprehensive-review-73930816d8d9

https://stackoverflow.com/questions/20027598/why-should-weights-of-neural-networks-be-initialized-to-random-numbers/40525812?noredirect=1#comment80759413_40525812

https://www.quora.com/If-one-initializes-a-set-of-weights-in-a-Neural-Network-to-zero-is-it-true-that-in-future-iterations-they-will-not-be-updated-by-gradient-descent-and-backpropagation
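
The two Q&As above ask why weights must be initialized randomly rather than to zero. A minimal NumPy sketch (mine, not from the linked answers) that illustrates the point: with all-zero weights the hidden layer receives zero gradient and symmetry is never broken, while a small random initialization lets the units diverge and learn.

```python
# Illustrative only: zero init gives zero gradients; random init breaks symmetry.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))              # 4 samples, 3 features
y = rng.normal(size=(4, 1))              # regression targets

def gradients(W1, W2):
    """One forward/backward pass of a tanh MLP with mean-squared error."""
    h = np.tanh(x @ W1)
    d_out = 2 * (h @ W2 - y) / len(x)    # dL/d(output)
    dW2 = h.T @ d_out
    dW1 = x.T @ ((d_out @ W2.T) * (1 - h ** 2))
    return dW1, dW2

dW1, dW2 = gradients(np.zeros((3, 5)), np.zeros((5, 1)))
print(np.abs(dW1).max(), np.abs(dW2).max())   # 0.0 0.0 -> nothing can update

dW1, dW2 = gradients(0.01 * rng.normal(size=(3, 5)),
                     0.01 * rng.normal(size=(5, 1)))
print(np.abs(dW1).max() > 0)                  # True -> learning can proceed
```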


[Intuitive explanation] What is regularization?

https://charlesliuyx.github.io/2017/10/03/%E3%80%90%E7%9B%B4%E8%A7%82%E8%AF%A6%E8%A7%A3%E3%80%91%E4%BB%80%E4%B9%88%E6%98%AF%E6%AD%A3%E5%88%99%E5%8C%96/
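
A brief sketch of the L2 (weight-decay) regularization the article above explains: adding lambda * ||w||^2 to the loss simply adds 2 * lambda * w to the gradient and shrinks the weights toward zero. The function and variable names here are my own, purely for illustration.

```python
import numpy as np

def ridge_loss_and_grad(w, X, y, lam=0.1):
    """Linear-regression loss with an L2 penalty, plus its gradient."""
    residual = X @ w - y
    loss = (residual ** 2).mean() + lam * (w ** 2).sum()
    grad = 2 * X.T @ residual / len(y) + 2 * lam * w
    return loss, grad

rng = np.random.default_rng(1)
X, y = rng.normal(size=(50, 3)), rng.normal(size=50)
w = np.zeros(3)
for _ in range(200):                      # plain gradient descent
    loss, grad = ridge_loss_and_grad(w, X, y)
    w -= 0.05 * grad
print(loss, w)                            # penalty keeps the weights small
```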

Hung-yi Lee (李宏毅) / Understand Deep Learning in One Day

https://www.slideshare.net/tw_dsconf/ss-62245351?qid=108adce3-2c3d-4758-a830-95d0a57e46bc&v=&b=&from_search=3

gradient descent

http://www.deeplearningbook.org/contents/numerical.html

http://cs231n.github.io/neural-networks-3/

https://arxiv.org/pdf/1609.04747.pdf

http://www.deeplearningbook.org/contents/optimization.html

https://www.analyticsvidhya.com/blog/2017/03/introduction-to-gradient-descent-algorithm-along-its-variants/
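
A toy sketch of the update rules the surveys above cover: vanilla gradient descent and the heavy-ball momentum variant, minimizing a simple quadratic. Step sizes and names are my own choices, purely illustrative.

```python
import numpy as np

A = np.diag([1.0, 10.0])          # ill-conditioned quadratic bowl f(x) = x^T A x
grad = lambda x: 2 * A @ x        # gradient of f

def descend(momentum=0.0, lr=0.05, steps=200):
    x, v = np.array([5.0, 5.0]), np.zeros(2)
    for _ in range(steps):
        v = momentum * v - lr * grad(x)   # momentum accumulates past gradients
        x = x + v
    return x

print(descend(momentum=0.0))   # plain gradient descent
print(descend(momentum=0.9))   # heavy-ball momentum; both end near the origin
```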

https://www.quora.com/Is-a-single-layered-ReLu-network-still-a-universal-approximator/answer/Conner-Davis-2

https://www.analyticsvidhya.com/blog/2017/04/comparison-between-deep-learning-machine-learning/

https://www.analyticsvidhya.com/blog/2017/05/25-must-know-terms-concepts-for-beginners-in-deep-learning/

softmax

https://www.quora.com/What-is-the-intuition-behind-SoftMax-function/answer/Sebastian-Raschka-1

https://blog.manash.me/implementing-l2-constrained-softmax-loss-function-on-a-convolutional-neural-network-using-1bb7c0aab7b1

https://eli.thegreenplace.net/2016/the-softmax-function-and-its-derivative/

http://ufldl.stanford.edu/tutorial/supervised/SoftmaxRegression/
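
A small sketch of the softmax function discussed in the links above, written in the numerically stable form (subtract the row maximum before exponentiating). Illustrative NumPy, not taken from any of the posts.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax: exp(z_i) / sum_j exp(z_j), shifted for stability."""
    z = z - z.max(axis=-1, keepdims=True)   # shifting does not change the result
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([[2.0, 1.0, 0.1]])
p = softmax(logits)
print(p, p.sum())          # probabilities summing to 1

# With cross-entropy loss and a one-hot target t, the gradient with respect
# to the logits takes the well-known simple form p - t.
t = np.array([[1.0, 0.0, 0.0]])
print(p - t)               # gradient of -sum(t * log p) w.r.t. the logits
```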

https://www.quora.com/What-is-the-role-of-the-activation-function-in-a-neural-network-How-does-this-function-in-a-human-neural-network-system

Important

http://www.cs.toronto.edu/~fleet/courses/cifarSchool09/slidesBengio.pdf

https://www.iro.umontreal.ca/~bengioy/papers/ftml_book.pdf

https://medium.com/@vivek.yadav/how-neural-networks-learn-nonlinear-functions-and-classify-linearly-non-separable-data-22328e7e5be1

How to explain convolution in plain language?

https://www.zhihu.com/question/22298352?rf=21686447

An intuitive explanation of how convolutional neural networks work?

https://www.zhihu.com/question/39022858

https://mlnotebook.github.io/post/

https://zhuanlan.zhihu.com/p/28478034

http://timdettmers.com/2015/03/26/convolution-deep-learning/

https://stats.stackexchange.com/questions/116362/what-does-the-convolution-step-in-a-convolutional-neural-network-do
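
A naive sketch of the 2-D convolution the answers above describe: slide a small kernel over the image and take a dot product at each position. Note that deep learning libraries actually compute cross-correlation (no kernel flip), which is what this toy function does as well; it is illustrative code of mine, not from the links.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' cross-correlation of a 2-D image with a small kernel."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)   # simple vertical-edge filter
print(conv2d(image, edge_kernel))                # 3x3 feature map
```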

https://www.quora.com/Why-does-deep-learning-architectures-only-use-the-non-linear-activation-function-in-the-hidden-layers

https://www.quora.com/Is-a-single-layered-ReLu-network-still-a-universal-approximator/answer/Conner-Davis-2

https://www.quora.com/Is-ReLU-a-linear-piece-wise-linear-or-non-linear-activation-function

=========

transfer learning

https://www.quora.com/Why-is-deep-learning-so-easy

===============

https://www.quora.com/How-can-I-learn-Deep-Learning-quickly

What is a simple explanation of how artificial neural networks work?

How can I learn Deep Learning quickly?

https://www.quora.com/How-can-I-learn-Deep-Learning-quickly

https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw

https://www.quora.com/Why-do-neural-networks-need-more-than-one-hidden-layer

Yoshua Bengio

https://www.iro.umontreal.ca/~bengioy/papers/ftml_book.pdf

https://www.iro.umontreal.ca/~lisa/pointeurs/TR1312.pdf

http://videolectures.net/deeplearning2015_bengio_theoretical_motivations/

http://www.cs.toronto.edu/~fleet/courses/cifarSchool09/slidesBengio.pdf

https://stats.stackexchange.com/questions/182734/what-is-the-difference-between-a-neural-network-and-a-deep-neural-network?rq=1

Universal Approximation Theorem

https://pdfs.semanticscholar.org/f22f/6972e66bdd2e769fa64b0df0a13063c0c101.pdf

http://www.cs.cmu.edu/~epxing/Class/10715/reading/Kornick_et_al.pdf

"Deep Learning" book reading series, Chapter 4: Numerical Computation | talk summary

Nonlinear Classifiers

https://www.quora.com/In-deep-learning-can-good-results-be-obtained-when-you-use-a-linear-function-in-between-the-hidden-layers

https://www.quora.com/Why-do-neural-networks-need-an-activation-function

https://stackoverflow.com/questions/9782071/why-must-a-nonlinear-activation-function-be-used-in-a-backpropagation-neural-net

http://ai.stanford.edu/~quocle/tutorial1.pdf

http://cs231n.github.io/neural-networks-1/

https://www.quora.com/Why-does-deep-learning-architectures-only-use-the-non-linear-activation-function-in-the-hidden-layers

https://medium.com/@vivek.yadav/how-neural-networks-learn-nonlinear-functions-and-classify-linearly-non-separable-data-22328e7e5be1

https://www.quora.com/What-is-the-ability-of-a-single-neuron-with-a-non-linear-activation-function-Can-it-only-classify-the-input-space-in-two-classes
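
A tiny sketch of the point these Q&As make: stacking layers without a nonlinearity collapses into a single linear map, so depth buys nothing; inserting a ReLU between the layers breaks that equivalence. Illustrative NumPy of my own.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(4, 2))
x = rng.normal(size=(5, 3))

linear_stack = (x @ W1) @ W2          # two "layers", no activation
single_layer = x @ (W1 @ W2)          # one layer with the merged weight matrix
print(np.allclose(linear_stack, single_layer))        # True: depth collapsed

relu = lambda z: np.maximum(z, 0.0)
nonlinear_stack = relu(x @ W1) @ W2   # ReLU in between
print(np.allclose(nonlinear_stack, single_layer))     # False: genuinely deeper
```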

NN, CNN

https://www.analyticsvidhya.com/blog/2017/04/comparison-between-deep-learning-machine-learning/

https://www.analyticsvidhya.com/blog/2017/06/architecture-of-convolutional-neural-networks-simplified-demystified/

[CV] An intuitive take on "convolution": from the Fourier transform to filters

https://zhuanlan.zhihu.com/p/28478034

How to explain convolution in plain language?

https://www.zhihu.com/question/22298352?rf=21686447

http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf

https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/

https://mlnotebook.github.io/post/CNN1/

http://bamos.github.io/2016/08/09/deep-completion/

https://www.analyticsvidhya.com/blog/2016/04/deep-learning-computer-vision-introduction-convolution-neural-networks/

https://www.analyticsvidhya.com/blog/2016/03/introduction-deep-learning-fundamentals-neural-networks/

https://www.analyticsvidhya.com/blog/2017/05/gpus-necessary-for-deep-learning/

Applied Deep Learning - Part 1: Artificial Neural Networks

https://medium.com/towards-data-science/applied-deep-learning-part-1-artificial-neural-networks-d7834f67a4f6

Paper

Dropout (Hinton)

https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf
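
A short sketch of dropout as described in the JMLR paper linked above: zero each activation with probability p at training time and leave the layer unchanged at test time. The paper scales the weights by p at test time; the sketch below uses the equivalent "inverted" formulation that rescales surviving activations during training instead. Illustrative code, not the paper's implementation.

```python
import numpy as np

def dropout(activations, p=0.5, training=True, rng=np.random.default_rng(0)):
    """Inverted dropout: drop with probability p, rescale survivors by 1/(1-p)."""
    if not training or p == 0.0:
        return activations
    mask = rng.random(activations.shape) >= p     # keep with probability 1-p
    return activations * mask / (1.0 - p)         # rescale so the mean matches

h = np.ones((2, 8))
print(dropout(h, p=0.5, training=True))   # roughly half the units zeroed, rest = 2.0
print(dropout(h, training=False))         # identity at test time
```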

Neural Network with Unbounded Activation Functions is Universal Approximator

https://arxiv.org/pdf/1505.03654.pdf

Transfer Learning

Paper by Yoshua Bengio (another deep learning pioneer).
Paper by Ali Sharif Razavian.
Paper by Jeff Donahue.
Paper and subsequent paper by Dario Garcia-Gasulla.

overfitting

https://medium.com/towards-data-science/deep-learning-overfitting-846bf5b35e24

University courses

CS231n

http://www.jianshu.com/p/182baeb82c71

https://www.coursera.org/learn/neural-networks

Paid video courses

https://www.udemy.com/deeplearning/?siteID=mDjthAvMbf0-ZE2EvHFczLauDLzv0OQAKg&LSNPUBID=mDjthAvMbf0

Paper

The Power of Depth for Feedforward Neural Networks

https://arxiv.org/pdf/1512.03965.pdf?platform=hootsuite

Deep Residual Learning for Image Recognition

https://arxiv.org/pdf/1512.03385v1.pdf

Speed/accuracy trade-offs for modern convolutional object detectors

https://arxiv.org/pdf/1611.10012.pdf

Playing Atari with Deep Reinforcement Learning

https://arxiv.org/pdf/1312.5602v1.pdf

Neural Network with Unbounded Activation Functions is Universal Approximator

https://arxiv.org/pdf/1505.03654.pdf

Transfer learning

https://databricks.com/blog/2017/06/06/databricks-vision-simplify-large-scale-deep-learning.html

TensorFlow Object Detection API

https://github.com/tensorflow/models/tree/477ed41e7e4e8a8443bc633846eb01e2182dc68a/object_detection

https://opensource.googleblog.com/2017/06/supercharge-your-computer-vision-models.html

Supercharge your Computer Vision models with the TensorFlow Object Detection API

https://research.googleblog.com/2017/06/supercharge-your-computer-vision-models.html

How to build a video object recognition system with the TensorFlow API

https://www.jiqizhixin.com/articles/2017-07-14-5

How well does Google's open-sourced TensorFlow Object Detection API work, and what impact will it have on the industry?

https://www.zhihu.com/question/61173908

https://stackoverflow.com/questions/42364513/how-to-recognise-multiple-objects-in-the-same-image

Training on your own dataset with the TensorFlow Object Detection API

https://zhuanlan.zhihu.com/p/27469690

https://github.com/tensorflow/models/tree/master/research/object_detection/data

https://medium.com/towards-data-science/building-a-toy-detector-with-tensorflow-object-detection-api-63c0fdf2ac95

https://medium.com/towards-data-science/building-a-real-time-object-recognition-app-with-tensorflow-and-opencv-b7a2b4ebdc32

https://medium.com/towards-data-science/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9

https://stackoverflow.com/questions/44973184/train-tensorflow-object-detection-on-own-dataset

https://cloud.google.com/blog/big-data/2017/06/training-an-object-detector-using-cloud-machine-learning-engine

https://medium.com/ilenze-com/object-detection-using-deep-learning-for-advanced-users-part-1-183bbbb08b19

Neuroscience

https://www.quora.com/What-are-the-parts-of-the-neuron-and-their-function
