https://medium.com/towards-data-science/deep-learning-for-object-detection-a-comprehensive-review-73930816d8d9

https://stackoverflow.com/questions/20027598/why-should-weights-of-neural-networks-be-initialized-to-random-numbers/40525812?noredirect=1#comment80759413_40525812

https://www.quora.com/If-one-initializes-a-set-of-weights-in-a-Neural-Network-to-zero-is-it-true-that-in-future-iterations-they-will-not-be-updated-by-gradient-descent-and-backpropagation
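
The two Q&A links above explain why all-zero initialization fails: every hidden unit receives the identical gradient, so the units never differentiate. A minimal NumPy sketch of the usual remedy, small random weights; the 1/sqrt(fan_in) scaling and the layer sizes are illustrative choices, not something prescribed by the linked answers.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(fan_in, fan_out):
    """Small random weights break the symmetry that all-zero init would create."""
    W = rng.normal(0.0, 1.0 / np.sqrt(fan_in), size=(fan_in, fan_out))
    b = np.zeros(fan_out)          # biases can safely start at zero
    return W, b

# With W = 0 every hidden unit would receive the identical gradient signal,
# so all units would remain copies of each other throughout training.
W1, b1 = init_layer(784, 128)
W2, b2 = init_layer(128, 10)
```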


[Intuitive Explanation] What Is Regularization?

https://charlesliuyx.github.io/2017/10/03/%E3%80%90%E7%9B%B4%E8%A7%82%E8%AF%A6%E8%A7%A3%E3%80%91%E4%BB%80%E4%B9%88%E6%98%AF%E6%AD%A3%E5%88%99%E5%8C%96/
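
To complement the intuitive article above, a minimal sketch of L2 (weight-decay) regularization applied to linear regression trained by gradient descent; the toy data, learning rate, and lambda are made-up values for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))                  # toy design matrix
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=100)

lam = 0.1                                      # regularization strength (lambda)
w = np.zeros(5)
lr = 0.01

for _ in range(1000):
    err = X @ w - y
    grad = X.T @ err / len(y) + lam * w        # data gradient + L2 penalty gradient
    w -= lr * grad

# The penalty lam * ||w||^2 / 2 shrinks weights toward zero, trading a little
# training error for a smoother, less overfit model.
print(w)
```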

Hung-yi Lee (李宏毅) / Understanding Deep Learning in One Day (一天搞懂深度學習)

https://www.slideshare.net/tw_dsconf/ss-62245351?qid=108adce3-2c3d-4758-a830-95d0a57e46bc&v=&b=&from_search=3

gradient descent

http://www.deeplearningbook.org/contents/numerical.html

http://cs231n.github.io/neural-networks-3/

https://arxiv.org/pdf/1609.04747.pdf

http://www.deeplearningbook.org/contents/optimization.html

https://www.analyticsvidhya.com/blog/2017/03/introduction-to-gradient-descent-algorithm-along-its-variants/
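
The references above cover the theory and the many variants (SGD, momentum, Adam, and so on); as a baseline for reading them, a minimal sketch of plain gradient descent on a one-dimensional quadratic.

```python
def f(x):
    return (x - 3.0) ** 2 + 1.0          # minimum at x = 3

def grad_f(x):
    return 2.0 * (x - 3.0)

x = 10.0                                  # arbitrary starting point
lr = 0.1                                  # learning rate (step size)
for step in range(50):
    x -= lr * grad_f(x)                   # move against the gradient

print(x, f(x))                            # x converges toward 3.0
```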

https://www.quora.com/Is-a-single-layered-ReLu-network-still-a-universal-approximator/answer/Conner-Davis-2

https://www.analyticsvidhya.com/blog/2017/04/comparison-between-deep-learning-machine-learning/

https://www.analyticsvidhya.com/blog/2017/05/25-must-know-terms-concepts-for-beginners-in-deep-learning/

softmax

https://www.quora.com/What-is-the-intuition-behind-SoftMax-function/answer/Sebastian-Raschka-1

https://blog.manash.me/implementing-l2-constrained-softmax-loss-function-on-a-convolutional-neural-network-using-1bb7c0aab7b1

https://eli.thegreenplace.net/2016/the-softmax-function-and-its-derivative/

http://ufldl.stanford.edu/tutorial/supervised/SoftmaxRegression/

https://www.quora.com/What-is-the-role-of-the-activation-function-in-a-neural-network-How-does-this-function-in-a-human-neural-network-system
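
Several links in this section derive the softmax function and discuss the role of activations; here is a minimal numerically stable softmax in NumPy (the max-subtraction trick is a standard safeguard against overflow, not something specific to any one linked article).

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: subtracting max(z) leaves the output
    unchanged but prevents overflow in exp for large logits."""
    z = z - np.max(z, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=-1, keepdims=True)

logits = np.array([2.0, 1.0, 0.1])
p = softmax(logits)
print(p, p.sum())        # probabilities that sum to 1
```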

Important background reading

http://www.cs.toronto.edu/~fleet/courses/cifarSchool09/slidesBengio.pdf

https://www.iro.umontreal.ca/~bengioy/papers/ftml_book.pdf

https://medium.com/@vivek.yadav/how-neural-networks-learn-nonlinear-functions-and-classify-linearly-non-separable-data-22328e7e5be1

How do you explain convolution in an intuitive, easy-to-understand way?

https://www.zhihu.com/question/22298352?rf=21686447

An intuitive explanation of how convolutional neural networks work?

https://www.zhihu.com/question/39022858

https://mlnotebook.github.io/post/

https://zhuanlan.zhihu.com/p/28478034

http://timdettmers.com/2015/03/26/convolution-deep-learning/

https://stats.stackexchange.com/questions/116362/what-does-the-convolution-step-in-a-convolutional-neural-network-do
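
To make the "convolution step" in the explanations above concrete, a minimal NumPy sketch of a valid 2-D cross-correlation (what deep learning libraries usually call convolution), with stride 1 and no padding; the example image and kernel are arbitrary.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid cross-correlation: slide the kernel over the image and
    take a weighted sum at each position."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)   # crude vertical-edge detector
print(conv2d(image, edge_kernel))
```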

https://www.quora.com/Why-does-deep-learning-architectures-only-use-the-non-linear-activation-function-in-the-hidden-layers

https://www.quora.com/Is-a-single-layered-ReLu-network-still-a-universal-approximator/answer/Conner-Davis-2

https://www.quora.com/Is-ReLU-a-linear-piece-wise-linear-or-non-linear-activation-function
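
The last three Q&A links turn on one point: ReLU is piecewise linear, yet a network of ReLU units composed with learned weights computes a nonlinear function. A tiny sketch with hand-picked weights, chosen purely for illustration, of the kind of piecewise-linear shape a one-hidden-layer ReLU network can produce.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Two ReLU units combined into a saturating ramp -- something no purely
# linear model (a single matrix multiply) can represent.
def tiny_relu_net(x):
    h = relu(np.array([x - 1.0, x - 2.0]))    # hidden layer: two hinge features
    return h @ np.array([1.0, -1.0])          # output layer: their difference

for x in [0.0, 1.0, 1.5, 2.0, 3.0, 4.0]:
    print(x, tiny_relu_net(x))                # rises between 1 and 2, flat elsewhere
```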

=========

transfer learning

https://www.quora.com/Why-is-deep-learning-so-easy

===============


What is a simple explanation of how artificial neural networks work?

How can I learn Deep Learning quickly?

https://www.quora.com/How-can-I-learn-Deep-Learning-quickly

https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw

https://www.quora.com/Why-do-neural-networks-need-more-than-one-hidden-layer

Yoshua Bengio (bengioy)

https://www.iro.umontreal.ca/~bengioy/papers/ftml_book.pdf

https://www.iro.umontreal.ca/~lisa/pointeurs/TR1312.pdf

http://videolectures.net/deeplearning2015_bengio_theoretical_motivations/

http://www.cs.toronto.edu/~fleet/courses/cifarSchool09/slidesBengio.pdf

https://stats.stackexchange.com/questions/182734/what-is-the-difference-between-a-neural-network-and-a-deep-neural-network?rq=1

Universal Approximation Theorem

https://pdfs.semanticscholar.org/f22f/6972e66bdd2e769fa64b0df0a13063c0c101.pdf

http://www.cs.cmu.edu/~epxing/Class/10715/reading/Kornick_et_al.pdf
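
A quick numerical illustration of the theorem's spirit (not a proof): a single wide hidden layer of tanh units, here with random hidden weights and a least-squares readout, can approximate a smooth one-dimensional function closely. All sizes, seeds, and the target function are arbitrary demo choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)[:, None]
y = np.sin(2 * x).ravel()                      # target function to approximate

n_hidden = 200
W = rng.normal(scale=2.0, size=(1, n_hidden))  # random hidden-layer weights
b = rng.uniform(-3, 3, size=n_hidden)
H = np.tanh(x @ W + b)                         # hidden activations (one layer)

w_out, *_ = np.linalg.lstsq(H, y, rcond=None)  # fit only the output layer
y_hat = H @ w_out
print("max abs error:", np.max(np.abs(y_hat - y)))
```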

"Deep Learning" reading-group series, Chapter 4: Numerical Computation | talk summary

Nonlinear Classifiers

https://www.quora.com/In-deep-learning-can-good-results-be-obtained-when-you-use-a-linear-function-in-between-the-hidden-layers

https://www.quora.com/Why-do-neural-networks-need-an-activation-function

https://stackoverflow.com/questions/9782071/why-must-a-nonlinear-activation-function-be-used-in-a-backpropagation-neural-net

http://ai.stanford.edu/~quocle/tutorial1.pdf

http://cs231n.github.io/neural-networks-1/

https://www.quora.com/Why-does-deep-learning-architectures-only-use-the-non-linear-activation-function-in-the-hidden-layers

https://medium.com/@vivek.yadav/how-neural-networks-learn-nonlinear-functions-and-classify-linearly-non-separable-data-22328e7e5be1

https://www.quora.com/What-is-the-ability-of-a-single-neuron-with-a-non-linear-activation-function-Can-it-only-classify-the-input-space-in-two-classes

NN / CNN

https://www.analyticsvidhya.com/blog/2017/04/comparison-between-deep-learning-machine-learning/

https://www.analyticsvidhya.com/blog/2017/06/architecture-of-convolutional-neural-networks-simplified-demystified/

[CV] An intuitive understanding of "convolution": from the Fourier transform to filters

https://zhuanlan.zhihu.com/p/28478034

How do you explain convolution in an intuitive, easy-to-understand way?

https://www.zhihu.com/question/22298352?rf=21686447

http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf

https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/

https://mlnotebook.github.io/post/CNN1/

http://bamos.github.io/2016/08/09/deep-completion/

https://www.analyticsvidhya.com/blog/2016/04/deep-learning-computer-vision-introduction-convolution-neural-networks/

https://www.analyticsvidhya.com/blog/2016/03/introduction-deep-learning-fundamentals-neural-networks/

https://www.analyticsvidhya.com/blog/2017/05/gpus-necessary-for-deep-learning/

Applied Deep Learning - Part 1: Artificial Neural Networks

https://medium.com/towards-data-science/applied-deep-learning-part-1-artificial-neural-networks-d7834f67a4f6

Paper

Dropout (Hinton et al., JMLR)

https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf
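
A minimal sketch of the inverted-dropout variant that most frameworks use today (rescale surviving units at training time, do nothing at test time); note this differs slightly from the JMLR paper linked above, which instead scales the weights at test time. The keep probability here is an arbitrary example value.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p_keep=0.5, training=True):
    """Inverted dropout: randomly zero units and rescale the survivors,
    so the expected activation matches the no-dropout forward pass."""
    if not training:
        return h                              # nothing to do at test time
    mask = rng.random(h.shape) < p_keep
    return h * mask / p_keep

h = np.ones((2, 8))                           # a fake batch of hidden activations
print(dropout(h, p_keep=0.5))
print(dropout(h, training=False))
```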

Neural Network with Unbounded Activation Functions is Universal Approximator

https://arxiv.org/pdf/1505.03654.pdf

Transfer Learning

Paper by Yoshua Bengio (another deep learning pioneer).
Paper by Ali Sharif Razavian.
Paper by Jeff Donahue.
Paper and subsequent paper by Dario Garcia-Gasulla.
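
The "Paper by ..." entries above (their hyperlinks are not reproduced here) motivate reusing features learned on a large dataset. A minimal sketch of the common feature-extraction recipe with tf.keras, assuming TensorFlow 2.x and the bundled ImageNet weights for MobileNetV2; the five-class head and the commented-out datasets are placeholders.

```python
import tensorflow as tf

# Pretrained ImageNet backbone, without its classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                     # freeze: reuse the learned features as-is

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),   # new head for 5 target classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)   # train only the new head
```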

overfitting

https://medium.com/towards-data-science/deep-learning-overfitting-846bf5b35e24
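
A small NumPy illustration of the phenomenon the article describes: raising model capacity (here, polynomial degree) drives training error down while test error typically rises. The degrees, sample size, and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 15)
y_train = np.sin(2 * np.pi * x_train) + 0.2 * rng.normal(size=x_train.size)
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)

for degree in (3, 12):
    coeffs = np.polyfit(x_train, y_train, degree)      # fit polynomial of given degree
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, round(train_err, 4), round(test_err, 4))
# The degree-12 fit has lower training error but (typically) higher test error.
```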

Top university courses

CS231n

http://www.jianshu.com/p/182baeb82c71

https://www.coursera.org/learn/neural-networks

Paid video courses

https://www.udemy.com/deeplearning/?siteID=mDjthAvMbf0-ZE2EvHFczLauDLzv0OQAKg&LSNPUBID=mDjthAvMbf0

Paper

The Power of Depth for Feedforward Neural Networks

https://arxiv.org/pdf/1512.03965.pdf?platform=hootsuite

Deep Residual Learning for Image Recognition

https://arxiv.org/pdf/1512.03385v1.pdf

Speed/accuracy trade-offs for modern convolutional object detectors

https://arxiv.org/pdf/1611.10012.pdf

Playing Atari with Deep Reinforcement Learning

https://arxiv.org/pdf/1312.5602v1.pdf

Neural Network with Unbounded Activation Functions is Universal Approximator

https://arxiv.org/pdf/1505.03654.pdf

Transfer learning

https://databricks.com/blog/2017/06/06/databricks-vision-simplify-large-scale-deep-learning.html

TensorFlow Object Detection API

https://github.com/tensorflow/models/tree/477ed41e7e4e8a8443bc633846eb01e2182dc68a/object_detection

https://opensource.googleblog.com/2017/06/supercharge-your-computer-vision-models.html

Supercharge your Computer Vision models with the TensorFlow Object Detection API

https://research.googleblog.com/2017/06/supercharge-your-computer-vision-models.html

How to build a video object recognition system with the TensorFlow API

https://www.jiqizhixin.com/articles/2017-07-14-5

How well does Google's open-sourced TensorFlow Object Detection API perform, and what impact will it have on the industry?

https://www.zhihu.com/question/61173908

https://stackoverflow.com/questions/42364513/how-to-recognise-multiple-objects-in-the-same-image

Training the TensorFlow Object Detection API on your own dataset

https://zhuanlan.zhihu.com/p/27469690


https://github.com/tensorflow/models/tree/master/research/object_detection/data

https://medium.com/towards-data-science/building-a-toy-detector-with-tensorflow-object-detection-api-63c0fdf2ac95

https://medium.com/towards-data-science/building-a-real-time-object-recognition-app-with-tensorflow-and-opencv-b7a2b4ebdc32

https://medium.com/towards-data-science/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9

https://stackoverflow.com/questions/44973184/train-tensorflow-object-detection-on-own-dataset

https://cloud.google.com/blog/big-data/2017/06/training-an-object-detector-using-cloud-machine-learning-engine

https://medium.com/ilenze-com/object-detection-using-deep-learning-for-advanced-users-part-1-183bbbb08b19
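
For orientation alongside the tutorials above, a sketch of the TF1-era frozen-graph inference pattern that the Object Detection API example notebook used; the model path is a placeholder, the dummy image stands in for real input, and the tensor names follow the convention from that notebook.

```python
import numpy as np
import tensorflow as tf   # TF 1.x style API, as in the 2017-era tutorials

PATH_TO_FROZEN_GRAPH = "frozen_inference_graph.pb"   # placeholder path to an exported model

# Load the exported detection graph.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

image = np.zeros((1, 300, 300, 3), dtype=np.uint8)   # stand-in for a real image batch

with tf.Session(graph=graph) as sess:
    boxes, scores, classes, num = sess.run(
        [graph.get_tensor_by_name(n + ":0")
         for n in ("detection_boxes", "detection_scores",
                   "detection_classes", "num_detections")],
        feed_dict={graph.get_tensor_by_name("image_tensor:0"): image})
print(num, scores[0][:5])
```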

Neuroscience

https://www.quora.com/What-are-the-parts-of-the-neuron-and-their-function
