NEURAL NETWORKS, PART 3: THE NETWORK

We learned about individual neurons in the previous section; now it is time to put them together to form an actual neural network.

The idea is quite simple – we line up multiple neurons to form a layer, and connect the outputs of one layer to the inputs of the next. Here is an illustration:

Figure 1: Neural network with two hidden layers.

Each red circle in the diagram represents a neuron, and the blue circles represent fixed values. From left to right, there are four columns: the input layer, two hidden layers, and an output layer. The output from neurons in the previous layer is directed into the input of each of the neurons in the next layer.

We have 3 features (vector space dimensions) in the input layer that we use for learning: x1, x2 and x3. The first hidden layer has 3 neurons, the second one has 2 neurons, and the output layer has 2 output values. The size of these layers is up to you – on complex real-world problems we would use hundreds or thousands of neurons in each layer.

The number of neurons in the output layer depends on the task. For example, if we have a binary classification task (something is true or false), we would only have one neuron. But if we have a large number of possible classes to choose from, our network can have a separate output neuron for each class.

The network in Figure 1 is a deep neural network, meaning that it has two or more hidden layers, allowing the network to learn more complicated patterns. Each neuron in the first hidden layer receives the input signals and learns some pattern or regularity. The second hidden layer, in turn, receives input from these patterns from the first layer, allowing it to learn “patterns of patterns” and higher-level regularities. However, the cost of adding more layers is increased complexity and possibly lower generalisation capability, so finding the right network structure is important.

Implementation

I have implemented a very simple neural network for demonstration. You can find the code here: SimpleNeuralNetwork.java

The first important method is initialiseNetwork(), which sets up the necessary structures:

public void initialiseNetwork(){
    input = new double[1 + M]; // 1 is for the bias
    hidden = new double[1 + H];
    weights1 = new double[1 + M][H];
    weights2 = new double[1 + H];
 
    input[0] = 1.0; // Setting the bias
    hidden[0] = 1.0;
}

M is the number of features in the feature vectors, and H is the number of neurons in the hidden layer. We add 1 to each of these, since we also use the bias constants.

We represent the input and hidden layer as arrays of doubles. For example, hidden[i] stores the current output value of the i-th neuron in the hidden layer.

The first set of weights, between the input and hidden layer, is stored as a matrix. Each of the (1+M) neurons in the input layer connects to H neurons in the hidden layer, leading to a total of (1+M)×H weights. We only have one output neuron, so the second set of weights, between the hidden and output layers, is technically a (1+H)×1 matrix, but we can simply represent it as a vector.
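To make these shapes concrete, here is a small standalone sketch (the values M = 11 and H = 3 are only illustrative – M = 11 happens to match the eleven country statistics used in the evaluation below, and none of this is taken verbatim from SimpleNeuralNetwork.java):

// Illustrative sketch of the weight shapes described above.
int M = 11;                                  // number of input features
int H = 3;                                   // number of hidden neurons
double[][] weights1 = new double[1 + M][H];  // (1+M) x H = 36 weights into the hidden layer
double[] weights2 = new double[1 + H];       // (1+H) x 1 = 4 weights into the single output neuron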

The second important method is forwardPass(), which propagates the current input vector through the network to produce an output value:

public void forwardPass(){
    for(int j = 1; j < hidden.length; j++){
        hidden[j] = 0.0;
        for(int i = 0; i < input.length; i++){
            hidden[j] += input[i] * weights1[i][j-1];
        }
        hidden[j] = sigmoid(hidden[j]);
    }
 
    output = 0.0;
    for(int i = 0; i < hidden.length; i++){
        output += hidden[i] * weights2[i];
    }
    output = sigmoid(output);
}

The first for-loop calculates the values in the hidden layer by multiplying the input vector with the corresponding weights and applying the sigmoid function. The last part calculates the output value by multiplying the hidden values with the second set of weights, again applying the sigmoid.
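The sigmoid() helper used above is not included in the snippet; assuming the standard logistic function, a minimal version would be:

// Standard logistic sigmoid, mapping any real value into the range (0, 1).
private double sigmoid(double z){
    return 1.0 / (1.0 + Math.exp(-z));
}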

Evaluation

To test out this network, I have created a sample dataset using the database at quandl.com. This dataset contains sociodemographic statistics for 141 countries:

  • Population density (per square km)
  • Population growth rate (%)
  • Urban population (%)
  • Life expectancy at birth (years)
  • Fertility rate (births per woman)
  • Infant mortality (deaths per 1000 births)
  • Enrolment in tertiary education (%)
  • Unemployment (%)
  • Estimated control of corruption (score)
  • Estimated government effectiveness (score)
  • Internet users (per 100)

Based on this information, we want to train a neural network that can predict whether a country's GDP per capita is above average (label 1 if it is, 0 if it is not).

I have split the dataset into a training set (121 countries) and a test set (40 countries). The values have been normalised by subtracting the mean and dividing by the standard deviation, using a script from a previous article. I have also pre-trained a model that we can load into this network and evaluate. You can download these from here: original data, training data, test data, pretrained model.
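The normalisation step itself is simple; as a rough sketch (not the actual script from the previous article), each feature column can be standardised like this:

// Sketch: standardise each feature column by subtracting its mean and dividing by its standard deviation.
// 'data' is assumed to be laid out as [example][feature]; this is not the original preprocessing script.
public static void normalise(double[][] data){
    int n = data.length;
    int m = data[0].length;
    for(int j = 0; j < m; j++){
        double mean = 0.0;
        for(int i = 0; i < n; i++) mean += data[i][j];
        mean /= n;
        double variance = 0.0;
        for(int i = 0; i < n; i++) variance += (data[i][j] - mean) * (data[i][j] - mean);
        double std = Math.sqrt(variance / n);
        for(int i = 0; i < n; i++) data[i][j] = (data[i][j] - mean) / (std > 0.0 ? std : 1.0);
    }
}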

You can then execute the neural network (remember to compile the code first and adjust the paths as needed):

java neuralnet.SimpleNeuralNetwork data/model.txt data/countries-classify-gdp-normalised.test.txt

The output should be something like this:

Label: 0    Prediction: 0.01
Label: 0    Prediction: 0.00
Label: 1    Prediction: 0.99
Label: 0    Prediction: 0.00
...
Label: 0    Prediction: 0.20
Label: 0    Prediction: 0.01
Label: 1    Prediction: 0.99
Label: 0    Prediction: 0.00
Accuracy: 0.9

The network is in verbose mode, so it prints out the label and prediction for each test item, followed by the overall accuracy. The test data contains 14 positive and 26 negative examples; a random system would be expected to get around 50% accuracy, and a system that always predicts the majority (negative) class would get 65%. Our network reached 90%, which means it has learned some useful patterns in the data.
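The accuracy figure comes from thresholding each prediction at 0.5 and comparing it to the gold label; a sketch of that calculation (not necessarily identical to the code in SimpleNeuralNetwork.java) would be:

// Sketch: count how many thresholded predictions match the gold labels.
public static double accuracy(double[] predictions, int[] labels){
    int correct = 0;
    for(int i = 0; i < predictions.length; i++){
        int predicted = predictions[i] >= 0.5 ? 1 : 0;
        if(predicted == labels[i]) correct++;
    }
    return (double) correct / predictions.length;
}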

In this case we simply loaded a pre-trained model. In the next section, I will describe how to learn this model from some training data.

from: http://www.marekrei.com/blog/neural-networks-part-3-network/
