In the previous post I went through a basic 1-layer Neural Network with a sigmoid activation function, including:

  • How to get the sigmoid function from a binary classification problem

  • An NN is still an optimization problem, so what is the target to optimize? - the cost function

  • How does the model learn? - gradient descent

  • What is the workflow of an NN? - forward/backward propagation

Now let's go deeper into the 2-layer Neural Network, from which you can extend to as many hidden layers as you want. Let's also try to vectorize everything.

1. The architecture of a 2-layer shallow NN

Below is the architecture of a 2-layer NN, including the input layer, one hidden layer and one output layer. The input layer is not counted when counting layers.

(1) Forward propagation

In each neuron, two operations take place after it takes in the input from the previous layer:

  1. a linear transformation of the input
  2. a non-linear activation function applied afterwards

Then the output is passed to the next layer as input.

Doing the above computation layer by layer, from the input layer to the output layer, is forward propagation. It tries to map each input \(x \in R^n\) to an output \(\hat{y}\).

For each training sample, forward propagation is defined as follows:

\(x \in R^{n*1}\) denotes the input data. In the picture n = 4.

\((w^{[1]} \in R^{k*n},b^{[1]}\in R^{k*1})\) are the parameters of the hidden layer. Here k = 3.

\((w^{[2]} \in R^{1*k},b^{[2]}\in R^{1*1})\) are the parameters of the output layer. The output is a binary variable with 1 dimension.

\((z^{[1]} \in R^{k*1},z^{[2]}\in R^{1*1})\) are the intermediate outputs after the linear transformation in the hidden and output layers.

\((a^{[1]} \in R^{k*1},a^{[2]}\in R^{1*1})\) are the outputs of each layer. To generalize, we can use \(a^{[0]} \in R^{n*1}\) to denote \(x\).

Here we use \(g(x)\) as the activation function for the hidden layer, and the sigmoid \(\sigma(x)\) for the output layer. We will discuss the available activation functions \(g(x)\) in a following post. What happens in forward propagation is the following:

\([1]\) \(z^{[1]} = {w^{[1]}} a^{[0]} + b^{[1]}\)
\([2]\) \(a^{[1]} = g(z^{[1]})\)
\([3]\) \(z^{[2]} = {w^{[2]}} a^{[1]} + b^{[2]}\)
\([4]\) \(a^{[2]} = \sigma(z^{[2]} )\)
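
As a quick illustration, here is a minimal NumPy sketch of these four steps for a single training sample. The toy dimensions and the choice of tanh as the hidden activation \(g\) are assumptions for the example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed toy dimensions: n = 4 input features, k = 3 hidden units
n, k = 4, 3
rng = np.random.default_rng(0)

a0 = rng.normal(size=(n, 1))                          # input x, shape (n, 1)
w1, b1 = rng.normal(size=(k, n)), np.zeros((k, 1))    # hidden-layer parameters
w2, b2 = rng.normal(size=(1, k)), np.zeros((1, 1))    # output-layer parameters

z1 = w1 @ a0 + b1        # [1] linear transformation, shape (k, 1)
a1 = np.tanh(z1)         # [2] hidden activation g, here tanh (an assumption)
z2 = w2 @ a1 + b2        # [3] linear transformation, shape (1, 1)
a2 = sigmoid(z2)         # [4] sigmoid output = prediction y_hat
```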

(2) Backward propagation

After forward propagation is done for a training sample \(x\), we have a prediction \(\hat{y}\). Comparing \(\hat{y}\) with \(y\), we then use the error between the prediction and the true value to update the parameters via gradient descent.

Backward propagation passes the gradient from the output layer back to the input layer using the chain rule, as below. The derivation is in the previous post.

\[ \frac{\partial L(a,y)}{\partial w} =
\frac{\partial L(a,y)}{\partial a} \cdot
\frac{\partial a}{\partial z} \cdot
\frac{\partial z}{\partial w}\]

\([4]\) \(dz^{[2]} = a^{[2]} - y\)
\([3]\) \(dw^{[2]} = dz^{[2]} a^{[1]T}\)
\([3]\) \(db^{[2]} = dz^{[2]}\)
\([2]\) \(dz^{[1]} = da^{[1]} * g^{[1]'}(z^{[1]}) = w^{[2]T} dz^{[2]} * g^{[1]'}(z^{[1]})\)
\([1]\) \(dw^{[1]} = dz^{[1]} a^{[0]T}\)
\([1]\) \(db^{[1]} = dz^{[1]}\)
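
Continuing the single-sample sketch from the forward pass (so \(g\) is tanh and \(g'(z) = 1 - \tanh(z)^2\)), the gradients could be computed roughly as follows; the label y and the learning rate are assumed values:

```python
y = 1.0                                          # assumed true label for this sample

dz2 = a2 - y                                     # [4] shape (1, 1)
dw2 = dz2 @ a1.T                                 # [3] shape (1, k)
db2 = dz2                                        # [3] shape (1, 1)
dz1 = (w2.T @ dz2) * (1 - np.tanh(z1) ** 2)      # [2] elementwise product with g'(z1)
dw1 = dz1 @ a0.T                                 # [1] shape (k, n)
db1 = dz1                                        # [1] shape (k, 1)

# One gradient-descent update with an assumed learning rate
lr = 0.1
w1 -= lr * dw1; b1 -= lr * db1
w2 -= lr * dw2; b2 -= lr * db2
```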

2. Vectorize and Generalize your NN

Let's derive the vectorized representation of the forward and backward propagation above. Vectorization speeds up the computation; we will come back to this again in batch gradient descent.

\(w^{[1]},b^{[1]}, w^{[2]}, b^{[2]}\) stay the same. Generally \(w^{[i]}\) has dimension \((h_{i},h_{i-1})\) and \(b^{[i]}\) has dimension \((h_{i},1)\), where \(h_i\) is the number of units in layer i.

\(Z^{[1]} \in R^{k*m}, Z^{[2]} \in R^{1*m}, A^{[0]} \in R^{n*m}, A^{[1]} \in R^{k*m}, A^{[2]}\in R^{1*m}\), where m is the number of training samples and \(A^{[0]}\) is the input matrix; each column is one training sample.
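
As a small sketch, a helper that allocates parameters with exactly these shapes might look like this; the layer sizes and the scaled random initialization are assumptions for the example:

```python
import numpy as np

def init_params(layer_sizes, seed=0):
    """layer_sizes = [n, h_1, ..., 1]; returns w_i of shape (h_i, h_{i-1}) and b_i of shape (h_i, 1)."""
    rng = np.random.default_rng(seed)
    params = {}
    for i in range(1, len(layer_sizes)):
        params[f"w{i}"] = rng.normal(size=(layer_sizes[i], layer_sizes[i - 1])) * 0.01
        params[f"b{i}"] = np.zeros((layer_sizes[i], 1))
    return params

params = init_params([4, 3, 1])                  # n = 4 inputs, k = 3 hidden units, 1 output
print(params["w1"].shape, params["b1"].shape)    # (3, 4) (3, 1)
```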

(1) Forward propagation

Following the logic above, the vectorized representation is:

\([1]\) \(Z^{[1]} = {w^{[1]}} A^{[0]} + b^{[1]}\)
\([2]\) \(A^{[1]} = g(Z^{[1]})\)
\([3]\) \(Z^{[2]} = {w^{[2]}} A^{[1]} + b^{[2]}\)
\([4]\) \(A^{[2]} = \sigma(Z^{[2]} )\)
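
A minimal vectorized sketch over a whole batch, reusing the parameters and sigmoid from the single-sample sketch above; the batch size m is an assumed value:

```python
m = 5                                    # assumed number of training samples
A0 = rng.normal(size=(n, m))             # input matrix, each column is one sample

Z1 = w1 @ A0 + b1                        # [1] (k, m); b1 broadcasts across columns
A1 = np.tanh(Z1)                         # [2] (k, m)
Z2 = w2 @ A1 + b2                        # [3] (1, m)
A2 = sigmoid(Z2)                         # [4] (1, m), one prediction per sample
```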

Have you noticed that the dimensions above do not match exactly?
\({w^{[1]}} A^{[0]}\) has dimension \((k,m)\), while \(b^{[1]}\) has dimension \((k,1)\).
However, Python (NumPy) takes care of this for you with broadcasting: it effectively replicates the lower-dimensional array along the missing dimension. Here \(b^{[1]}\) is replicated m times to become \((k,m)\).
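
A tiny demonstration of this broadcasting behaviour in NumPy (the shapes are chosen arbitrarily for the example):

```python
import numpy as np

WA = np.ones((3, 5))             # stands in for w1 @ A0, shape (k, m) = (3, 5)
b = np.arange(3).reshape(3, 1)   # shape (k, 1) = (3, 1)
print((WA + b).shape)            # (3, 5): b is implicitly replicated across the 5 columns
```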

(2) Backward propagation

Similarly, the backward propagation is:
\([4]\) \(dZ^{[2]} = A^{[2]} - Y\)
\([3]\) \(dw^{[2]} =\frac{1}{m} dZ^{[2]} A^{[1]T}\)
\([3]\) \(db^{[2]} = \frac{1}{m} \sum{dZ^{[2]}}\)
\([2]\) \(dZ^{[1]} = dA^{[1]} * g^{[1]'}(Z^{[1]}) = w^{[2]T} dZ^{[2]} * g^{[1]'}(Z^{[1]})\)
\([1]\) \(dw^{[1]} = \frac{1}{m} dZ^{[1]} A^{[0]T}\)
\([1]\) \(db^{[1]} = \frac{1}{m} \sum{dZ^{[1]} }\)
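
A matching sketch of the vectorized gradients, continuing the batch example above (so \(g\) is still tanh); the labels Y are assumed random values for illustration:

```python
Y = rng.integers(0, 2, size=(1, m)).astype(float)   # assumed binary labels, shape (1, m)

dZ2 = A2 - Y                                        # [4] (1, m)
dw2 = (dZ2 @ A1.T) / m                              # [3] (1, k)
db2 = np.sum(dZ2, axis=1, keepdims=True) / m        # [3] (1, 1)
dZ1 = (w2.T @ dZ2) * (1 - np.tanh(Z1) ** 2)         # [2] (k, m), elementwise g'(Z1)
dw1 = (dZ1 @ A0.T) / m                              # [1] (k, n)
db1 = np.sum(dZ1, axis=1, keepdims=True) / m        # [1] (k, 1)
```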

In the next post, I will talk about some other details of NNs, like hyperparameters and activation functions.

To be continued.


Reference

  1. Ian Goodfellow, Yoshua Bengio, Aaron Courville, "Deep Learning"
  2. Deeplearning.ai https://www.deeplearning.ai/
