1. Several ways to multiply matrices in numpy:


    # x1: a x n, x2: n x b
    np.dot(x1, x2)       # matrix product; result has shape a x b
    np.outer(x1, x2)     # outer product; essentially np.ravel(x1) (as a column) times np.ravel(x2) (as a row)
    np.multiply(x1, x2)  # element-wise product: [[x1[0][0]*x2[0][0], x1[0][1]*x2[0][1], ...], ...]
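
A minimal sketch (the shapes are chosen only for illustration) showing what each operation returns:

```python
import numpy as np

x1 = np.random.randn(2, 3)   # a x n with a=2, n=3
x2 = np.random.randn(3, 4)   # n x b with b=4
x3 = np.random.randn(2, 3)   # same shape as x1, for the element-wise product

print(np.dot(x1, x2).shape)       # (2, 4)  -- matrix product, a x b
print(np.outer(x1, x2).shape)     # (6, 12) -- outer product of the flattened arrays
print(np.multiply(x1, x3).shape)  # (2, 3)  -- element-wise product
```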

2. Where bugs come from

Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
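
One habit that helps (a hedged sketch, not from the notes themselves): assert the shape you expect after each step, so a dimension mismatch fails loudly where it happens.

```python
import numpy as np

n_x, n_h, m = 4, 3, 5                 # illustrative sizes: input dim, hidden units, batch size
X = np.random.randn(n_x, m)           # each column is one example
W1 = np.random.randn(n_h, n_x)
b1 = np.zeros((n_h, 1))

Z1 = np.dot(W1, X) + b1               # broadcasting adds b1 to every column
assert Z1.shape == (n_h, m), f"unexpected shape {Z1.shape}"
```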

3. Common steps for pre-processing a new dataset are:

- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)
- Reshape the datasets such that each example is now a vector of size (num_px \* num_px \* 3, 1)
- "Standardize" the data

4. Unstructured data:

Unstructured data is a generic label for describing data that is not contained in a database or some other type of data structure. Unstructured data can be textual or non-textual. Textual unstructured data is generated in media like email messages, PowerPoint presentations, Word documents, collaboration software and instant messages. Non-textual unstructured data is generated in media like JPEG images, MP3 audio files and Flash video files.

5. Activation functions:

  • Choice of activation function:

    1. If the output is either 0 or 1, use sigmoid for the output layer and ReLU for the other units.
    2. Except for the output layer, tanh generally does better than sigmoid.
    3. ReLU can be upgraded to leaky ReLU.
  • Why are ReLU and leaky ReLU often superior to sigmoid and tanh?

    -- For positive inputs, the derivative of ReLU (and leaky ReLU) stays well above 0 instead of saturating, so learning is much faster.

  • A linear activation in a hidden layer is more or less useless (a composition of linear functions is still linear); the output layer is the one place where it can make sense, e.g. for regression.
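
For reference, a small sketch of the four activations discussed above (the 0.01 slope for leaky ReLU is just a common default, not mandated by the notes):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    return np.tanh(z)

def relu(z):
    return np.maximum(0, z)

def leaky_relu(z, slope=0.01):
    return np.where(z > 0, z, slope * z)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z), tanh(z), relu(z), leaky_relu(z))
```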

6. Regularization:

Starting point: the regularized cost is \(J(w, b) = \frac{1}{m} \sum_{i=1}^{m} L(\hat{y}^{(i)}, y^{(i)}) + \frac{\lambda}{2m} \|w\|_2^2\)
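
A minimal sketch of how this regularized cost could be computed for logistic regression; the cross-entropy loss and all variable names here are assumptions for illustration:

```python
import numpy as np

def regularized_cost(w, b, X, Y, lambd):
    """X: (n_x, m), Y: (1, m), w: (n_x, 1). Returns cross-entropy cost plus L2 penalty."""
    m = X.shape[1]
    A = 1.0 / (1.0 + np.exp(-(np.dot(w.T, X) + b)))           # predictions y_hat
    cross_entropy = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m
    l2_penalty = (lambd / (2 * m)) * np.sum(np.square(w))     # (lambda / 2m) * ||w||_2^2
    return cross_entropy + l2_penalty
```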

  1. L2 regularization: \(\frac{\lambda}{2m}\sum_{j=1}^{n_x} w_j^2 = \frac{\lambda}{2m}\|w\|_2^2\)

    One aspect in which tanh works better than sigmoid (in terms of regularization): when z is very close to 0 (which regularization encourages by shrinking the weights), tanh(z) is almost linear with slope close to 1, while the slope of sigmoid(z) is much smaller (at most 0.25).

  2. Dropout:

     Method: randomly zero out some of the layer's units each iteration, e.g. A = np.multiply(A, D), where D is a random 0/1 mask (with inverted dropout, also divide A by keep_prob); see the sketch after this list.

     Caution: don't apply dropout at test time -- it adds computation and makes the predictions random.

     Why it works:

       Intuition: a unit can't rely on any one feature, so it has to spread out its weights (which shrinks them).

       Besides, you can set a different keep_prob per layer -- e.g. a lower keep_prob (more dropout) on larger, more complex layers that are more prone to over-fitting.

  3. Data augmentation:

     Apply operations such as flipping, rotation, and zooming to your images without changing their labels, to prevent over-fitting to incidental aspects such as the direction of faces or the size of cats.

  4. Early stopping.
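
The inverted-dropout sketch referenced in the dropout item above, covering one layer's forward pass at training time; keep_prob and the variable names are assumptions for illustration:

```python
import numpy as np

def forward_with_dropout(A_prev, W, b, keep_prob=0.8):
    """One hidden layer with ReLU and inverted dropout (training time only)."""
    Z = np.dot(W, A_prev) + b
    A = np.maximum(0, Z)                                   # ReLU
    D = (np.random.rand(*A.shape) < keep_prob)             # 0/1 mask, one entry per unit
    A = np.multiply(A, D)                                  # shut down a fraction of units
    A = A / keep_prob                                      # scale up so the expected value of A is unchanged
    return A, D                                            # keep D to mask dA in backprop
```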

7. Dealing with vanishing or exploding gradients:

     Set \(W^{[L]}\) = np.random.randn(shape) * np.sqrt(\(\frac{2}{n^{[L-1]}}\)) if the activation function is ReLU (He initialization);

       otherwise use np.random.randn(shape) * np.sqrt(\(\frac{1}{n^{[L-1]}}\)) or * \(\sqrt{\frac{2}{n^{[L-1]}+n^{[L]}}}\) (Xavier initialization).
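
A sketch of this initialization rule for a single layer (the layer sizes in the usage line are made up):

```python
import numpy as np

def initialize_layer(n_prev, n_curr, activation="relu"):
    """He initialization for ReLU layers, Xavier-style otherwise."""
    if activation == "relu":
        scale = np.sqrt(2.0 / n_prev)
    else:
        scale = np.sqrt(1.0 / n_prev)          # or np.sqrt(2.0 / (n_prev + n_curr))
    W = np.random.randn(n_curr, n_prev) * scale
    b = np.zeros((n_curr, 1))
    return W, b

W1, b1 = initialize_layer(n_prev=1024, n_curr=256, activation="relu")
```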

8. Gradient Checking:

  1. for i in range(len(\(\theta\))):

    check whether \(d\theta_{approx}[i] = \frac{J(\theta_1, \theta_2, ..., \theta_i+\epsilon, ...) - J(\theta_1, \theta_2, ..., \theta_i-\epsilon, ...)}{2\epsilon}\) matches \(d\theta[i] = \frac{\partial J}{\partial \theta_i}\),

             i.e. whether \(d\theta_{approx} \approx d\theta\),

             measured by \(\frac{\|d\theta_{approx} - d\theta\|_2}{\|d\theta_{approx}\|_2 + \|d\theta\|_2}\): around \(10^{-7}\) is great, while around \(10^{-3}\) indicates a bug (see the numerical sketch after the tips).

  2. Tips:

    • Use it only for debugging, not during training (it is far too slow to run every iteration).
    • If the algorithm fails grad check, look at the individual components (\(db^{[L]}, dW^{[L]}\)) to try to identify the bug.
    • Remember to include the regularization term in \(J\).
    • It doesn't work together with dropout (turn dropout off while checking).
    • Run at random initialization; perhaps again after some training.
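
The numerical sketch referenced above, run on a toy quadratic cost so it stays self-contained; the cost function and epsilon are assumptions for illustration:

```python
import numpy as np

def J(theta):
    return np.sum(theta ** 2)          # toy cost: J(theta) = sum(theta_i^2)

def dJ(theta):
    return 2 * theta                   # analytic gradient to be checked

theta = np.random.randn(5)
eps = 1e-7
dtheta_approx = np.zeros_like(theta)
for i in range(len(theta)):
    plus, minus = theta.copy(), theta.copy()
    plus[i] += eps
    minus[i] -= eps
    dtheta_approx[i] = (J(plus) - J(minus)) / (2 * eps)

dtheta = dJ(theta)
diff = np.linalg.norm(dtheta_approx - dtheta) / (np.linalg.norm(dtheta_approx) + np.linalg.norm(dtheta))
print(diff)                            # around 1e-7 or smaller means the gradient looks correct
```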

9. Exponentially weighted averages:

  Definition: let \(V_{t} = {\beta}V_{t-1} + (1 - \beta)\theta_t\) (the \(V_t\) are the averages, and the \(\theta_t\) are the raw data points),

and the bias-corrected estimate is \(V_t^{\text{corrected}} = \frac{V_{t}}{1 - {\beta}^t}\) (to correct the bias in the initial phase).
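
A small sketch of the definition with bias correction, applied to a noisy 1-D series that is made up for illustration:

```python
import numpy as np

def ewa(theta, beta=0.9):
    """Exponentially weighted average with bias correction."""
    v = 0.0
    out = []
    for t, x in enumerate(theta, start=1):
        v = beta * v + (1 - beta) * x
        out.append(v / (1 - beta ** t))    # bias-corrected V_t
    return np.array(out)

theta = np.sin(np.linspace(0, 3, 50)) + 0.3 * np.random.randn(50)
print(ewa(theta)[:5])                       # smoothed version of the first few points
```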

  Usage -- gradient descent with momentum: when the parameter updates oscillate, the vertical components of successive gradients average out to almost zero, so averaging the gradients with an EWA damps the oscillation and prevents divergence while still making progress in the main direction:

  On iteration t:

    Compute dW, db on the current mini-batch

    \(v_{dW} = {\beta}v_{dW} + (1 - \beta)dW\)

    \(v_{db} = {\beta}v_{db} + (1 - \beta)db\)

    \(W = W - {\alpha}v_{dW}, b = b - {\alpha}v_{db}\)

    Hyperparameters: \(\alpha\) and \(\beta\) (a common default is \(\beta = 0.9\))
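
A sketch of this momentum update for one parameter pair; the toy gradients and default learning rate are assumptions for illustration:

```python
import numpy as np

def momentum_step(W, b, dW, db, v_dW, v_db, alpha=0.01, beta=0.9):
    """One gradient-descent-with-momentum update, mirroring the equations above."""
    v_dW = beta * v_dW + (1 - beta) * dW
    v_db = beta * v_db + (1 - beta) * db
    W = W - alpha * v_dW
    b = b - alpha * v_db
    return W, b, v_dW, v_db

# Toy usage: W, b and their gradients from the current mini-batch
W, b = np.random.randn(3, 2), np.zeros((3, 1))
v_dW, v_db = np.zeros_like(W), np.zeros_like(b)
dW, db = np.random.randn(*W.shape), np.random.randn(*b.shape)
W, b, v_dW, v_db = momentum_step(W, b, dW, db, v_dW, v_db)
```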
