I finally passed all the Deeplearning.ai courses in March! I highly recommend them!

If you already know the basics, you may be most interested in courses 4 & 5, which cover many interesting cases in CNN and RNN. That said, I think courses 1 & 2 are better structured than the others and gave me more insight into NN.

I have uploaded the assignments of all the deep learning courses to my GitHub. You can find the assignments for CNN here. Hopefully they can give you some help when you struggle with the grader. For a new course, you do need extra patience to fight with the grader. Don't ask me how I know this ... >_<

I have finished the summary of the first course in my previous post:

  1. Sigmoid and shallow NN
  2. Forward & Backward Propagation
  3. Regularization

I will keep working on the others. Since I have been using CNN at work recently, let's go through CNN first. Any feedback is absolutely welcome! And please correct me if I make any mistakes.


When talking about CNN, image applications usually come to mind first. But CNN can actually be applied more generally, to any data that fits certain assumptions. What assumptions? You will find out later.

1. CNN Features

CNN stands out from a traditional NN in 3 areas:

  • sparse interaction (connection)
  • parameter sharing
  • equivariant representation.

Actually, the third feature is more a result of the first 2. Let's go through them one by one.

[Figure: fully connected NN vs. NN with sparse connections]

Sparse interaction: unlike in a fully connected neural network, each output of a convolution layer is connected to only a limited number of inputs, as pictured above. For a hidden layer that takes \(m\) neurons as input and gives \(n\) neurons as output, a fully connected layer needs a weight matrix of size \(m*n\) to compute the outputs. When \(m\) is very big, this weight matrix becomes huge. With sparse connections, only \(k\) inputs are connected to each output, which decreases the computation scale from \(O(m*n)\) to \(O(k*n)\) and the memory usage from \(m*n\) to \(k*n\).

Parameter sharing is more insightful when considered together with sparse connection, because sparse connection alone creates segmentation among the data. For example, \(x_1\) and \(x_5\) in the plot above are independent due to sparse connection. With parameter sharing, however, the same weight matrix is used across all positions, which creates a hidden connectivity. Additionally, it further reduces the memory for the weight matrix from \(k*n\) to \(k\). Especially when dealing with images, going from \(m*n\) to \(k\) can be a huge improvement in memory usage, as the rough count below shows.
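To make the savings concrete, here is a back-of-the-envelope count with hypothetical layer sizes (my own illustrative numbers, not from the course):

```python
# Rough parameter counts for one hidden layer, using hypothetical sizes
m, n = 1000, 1000   # m input units, n output units
k = 9               # each output sees only k inputs (e.g. a 3x3 kernel)

print(m * n)   # fully connected:      1,000,000 weights
print(k * n)   # sparse connections:   9,000 weights
print(k)       # + parameter sharing:  9 weights (one kernel reused everywhere)
```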

Equivariant representation is a result of parameter sharing. Because the same weight matrix is used at every position across the input, the output is equivariant to parallel shifts (translations). Say \(g\) represents a parallel shift and \(f\) is the convolution function; then \(f(g(x)) = g(f(x))\). This feature is very useful when we only care about the presence of a feature, not its position. On the other hand, it can be a big flaw of CNN that it is not good at detecting position.
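Here is a quick numerical sketch of this property, assuming scipy is available. I use a circular shift and a wrap-around boundary so the equality holds exactly at the edges (note that what deep learning frameworks call "convolution" is technically cross-correlation):

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))   # toy input
k = rng.standard_normal((3, 3))   # toy kernel

# g: shift the input down by 2 rows (circular, so edges wrap cleanly)
g = lambda a: np.roll(a, 2, axis=0)
# f: 2D cross-correlation with wrap-around padding, same output size
f = lambda a: correlate2d(a, k, mode='same', boundary='wrap')

# Equivariance: convolving a shifted input equals shifting the convolved output
assert np.allclose(f(g(x)), g(f(x)))
```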

2. CNN Components

Given the above 3 features, let's talk about how to implement a CNN.

(1). Kernel

The kernel, or so-called filter, is the weight matrix in a CNN. It performs an element-wise multiplication with each patch of the input and outputs the sum. A kernel usually has a size much smaller than the original input, so that we can take advantage of the decrease in memory.
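As a minimal sketch of what a kernel does (plain numpy, with toy sizes of my own choosing, not code from the course):

```python
import numpy as np

def conv2d(x, k):
    """Valid 2D cross-correlation: slide the kernel over the input,
    multiply element-wise with each patch, and sum."""
    n_h, n_w = x.shape
    k_h, k_w = k.shape
    out = np.zeros((n_h - k_h + 1, n_w - k_w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+k_h, j:j+k_w] * k)
    return out

x = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 "image"
k = np.ones((3, 3)) / 9.0                      # 3x3 averaging kernel
print(conv2d(x, k).shape)                      # (3, 3): 5 - 3 + 1 per dim
```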

Below is a 2D input of a convolution layer. It can be a greyscale image, or a multivariate time series.

When the input is 3-dimensional, we call the 3rd dimension the channel (volume). The most common case is an RGB image input, where each channel is a 2D matrix representing one color. See below:

Please keep in mind that a kernel always has the same number of channels as the input! Therefore it reduces all dimensions (unless you use a \(1*1\) kernel). But we can have multiple kernels to capture different features. Below, we have 2 kernels (filters), each with dimension (3, 3, 3).

Dimension Cheatsheet of Kernel

  • Input dimension: ( n_w, n_h, n_channel ). When n_channel = 1, it is a 2D input.
  • Kernel dimension: ( n_k, n_k, n_channel ). A kernel is not always square; it can be ( n_k1, n_k2, n_channel ).
  • Output dimension: ( n_w - n_k + 1, n_h - n_k + 1, 1 ).
  • With n different kernels, the output dimension becomes ( n_w - n_k + 1, n_h - n_k + 1, n ).
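A small numpy sketch to verify the cheatsheet, using the hypothetical 2 kernels of dimension (3, 3, 3) from above:

```python
import numpy as np

n_w, n_h, n_c = 6, 6, 3      # e.g. a 6x6 RGB input
n_k, n_filters = 3, 2        # two 3x3x3 kernels, as in the example above

rng = np.random.default_rng(0)
x = rng.standard_normal((n_w, n_h, n_c))
kernels = rng.standard_normal((n_filters, n_k, n_k, n_c))  # channels match input

out = np.zeros((n_w - n_k + 1, n_h - n_k + 1, n_filters))
for f in range(n_filters):
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # the element-wise product over all channels collapses to one number
            out[i, j, f] = np.sum(x[i:i+n_k, j:j+n_k, :] * kernels[f])

print(out.shape)  # (4, 4, 2) = (n_w - n_k + 1, n_h - n_k + 1, n_filters)
```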

(2). Stride

As we mentioned before, one key advantage of CNN is speeding up computation through dimension reduction. Can we be more aggressive about this?! Yes, with stride! Basically, when moving the kernel across the input, stride makes it skip over inputs by a certain length.
We can easily tell how stride works from the comparison below:
[Animation: no stride]

[Animation: stride = 1]

Thanks to vdumoulin for such great animations. You can find more on his GitHub.

Stride can further speed up computation, but it loses some information in the output. We can think of it as down-sampling the output.
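One way to see stride as down-sampling: a stride-\(s\) convolution gives the same result as a stride-1 convolution followed by keeping every \(s\)-th output. A small sketch, assuming scipy is available:

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
x = rng.standard_normal((7, 7))
k = rng.standard_normal((3, 3))

full = correlate2d(x, k, mode='valid')   # stride-1 output: (5, 5)
s = 2
strided = full[::s, ::s]                 # keep every s-th row/column: (3, 3)
print(full.shape, strided.shape)
```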

(3). Padding

Both kernel and stride function as dimension reduction techniques, so for each convolution layer, the output dimension is always smaller than the input. However, if we want to build a deep convolution network, we don't want the input size to shrink too fast. A small kernel can partly solve this problem, but in order to maintain a certain dimension we need zero padding. Basically, zero padding adds zeros around your input, like below:
[Animation: padding = 1]

There are a few frequently used types of padding:

  • Valid padding: no padding at all, output = input - (k - 1)
  • Same padding: maintain the same size, output = input
  • Full padding: each input is visited k times, output = input + (k - 1)

To summarize: let \(s\) denote the stride, \(p\) the padding, \(n\) the input size, and \(k\) the kernel size (kernel and input are both square for simplicity). Then the output dimension is:
\[\lfloor (n+2p-k)/s\rfloor +1\]
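A tiny helper to sanity-check the formula against the three padding types (my own illustrative numbers):

```python
import math

def conv_output_size(n, k, s=1, p=0):
    """Output size of a conv layer: floor((n + 2p - k) / s) + 1."""
    return math.floor((n + 2 * p - k) / s) + 1

n, k = 28, 5
print(conv_output_size(n, k))                  # valid: 28 - (5-1) = 24
print(conv_output_size(n, k, p=(k - 1) // 2))  # same:  28 (odd k, s=1)
print(conv_output_size(n, k, p=k - 1))         # full:  28 + (5-1) = 32
```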

(4). Pooling

I remember a recent CNN paper in which the author admits something like: "I can't explain why I add the pooling layer, but a good CNN structure always comes with one."

Pooling functions as a dimension reduction technique. But unlike the kernel, which reduces all dimensions, pooling keeps the channel dimension untouched. Therefore it can further accelerate computation.

Basically, pooling outputs a summary statistic over a certain amount of the input. This introduces a feature even stronger than equivariant representation -- invariant representation.

The most commonly used pooling operations are max and average pooling. There are also L2 pooling, weighted average pooling, etc.
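A minimal max-pooling sketch in numpy (toy input of my own choosing):

```python
import numpy as np

def max_pool(x, size=2, stride=2):
    """Max pooling over (size x size) windows of a 2D input."""
    n_h, n_w = x.shape
    out_h = (n_h - size) // stride + 1
    out_w = (n_w - size) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = x[i*stride:i*stride+size, j*stride:j*stride+size].max()
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool(x))   # [[ 5.  7.] [13. 15.]]
```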

3. CNN Structure

(1). Intuition of CNN

In the Deep Learning book, the authors give a very interesting insight. They consider convolution and pooling as an infinitely strong prior distribution. The distribution says that all hidden units share the same weights, are derived from a certain amount of the input, and have translation-invariant features.

Under Bayesian statistics, a prior distribution is a subjective preference of the model based on experience, and the stronger the prior is, the higher impact it has on the optimal model. So before we use a CNN, we have to make sure that our data fits the above assumptions.

(2). Classic structure

A classic convolution neural network consists of a convolution layer, a non-linear activation layer, and a pooling layer. For a deep NN, we can stack a few such blocks together, like below.

The above plot is taken from Adit Deshpande's A Beginner's Guide To Understanding Convolutional Neural Networks, one of my favorite ML bloggers.

The interesting part of a deep CNN is that a deep hidden layer can receive more information from the input than a shallow layer: although the direct connections are sparse, the deeper hidden neurons are still able to receive nearly all the features from the input.
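To tie the components together, here is a minimal sketch of the classic conv -> activation -> pool stack, assuming TensorFlow 2.x Keras; the layer sizes are arbitrary illustrative choices, not a recommended architecture:

```python
# A sketch of the classic conv -> activation -> pool stack (assumes TF 2.x)
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input((32, 32, 3)),                          # e.g. a 32x32 RGB image
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(2),                             # halve spatial dims, keep channels
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),             # e.g. a 10-class classifier
])
model.summary()                                         # inspect the output dims per layer
```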

(3). To be continued

As I learn more and more about NN, I gradually realize that NN is more flexible than I thought. It is like LEGO: convolution and pooling are just different basic tools with different assumptions. You need to analyze your data, select the tools that fit your assumptions, and try combining them to improve performance iteratively. Later I will open a new post to collect all the NN structures that I read about.


Reference
1. Vincent Dumoulin, Francesco Visin - A guide to convolution arithmetic for deep learning
2. Adit Deshpande - A Beginner's Guide To Understanding Convolutional Neural Networks
3. Ian Goodfellow, Yoshua Bengio, Aaron Courville - Deep Learning
