Machine Learning, Homework 9, Neural Nets
April 15, 2019
Contents
Boston Housing with a Single Layer and R package nnet
  Problem
Digit Recognition with R package h2o
  Problem
Boston Housing with a Single Layer and R package nnet
Let’s do a very simple example with single layer neural nets.
We’ll do the Boston housing data with x = lstat and y = medv so that we have one numeric x and a numeric y.
We’ve used this classic data set a few times so we are very familiar with it.
Let’s get the data, pull off x and y and standardize x.
library(MASS) ## a library of example datasets
attach(Boston)
## standardize lstat
rg = range(Boston$lstat)
lstats = (Boston$lstat-rg[1])/(rg[2]-rg[1])
##make data frame with standardized lstat values sorted for plotting
ddf = data.frame(lstats,medv=Boston$medv)
oo = order(ddf$lstats) #order the data by x, convenient for plotting
ddf = ddf[oo,]
head(ddf)
## lstats medv
## 162 0.000000000 50.0
## 163 0.005242826 50.0
## 41 0.006898455 34.9
## 233 0.020419426 41.7
## 193 0.031456954 36.4
## 205 0.031732892 50.0
And here is the familiar plot:
plot(ddf)
[Figure: scatter plot of medv (vertical axis) against the standardized lstats (horizontal axis)]
Let’s fit a simple neural net.
One hidden layer with 5 units (neurons).
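For reference (my addition, just writing out the standard form of what nnet fits here): with linout=T the fitted function is
f(x) = b0 + sum_{j=1}^{5} wj * sig(bj + vj*x), where sig(z) = 1/(1+exp(-z)),
and nnet minimizes sum_i (yi - f(xi))^2 + decay*(sum of all squared weights), so decay acts as a ridge-type penalty on the weights.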
library(nnet)
set.seed(14)
nn1 = nnet(medv~lstats,ddf,size=5,decay=.1,linout=T,maxit=1000)
## # weights: 16
## initial value 274435.143486
## iter 10 value 14655.902880
## iter 20 value 13675.210318
## iter 30 value 13618.543249
## iter 40 value 13593.167670
## iter 50 value 13548.561442
## iter 60 value 13545.520754
## iter 70 value 13544.330448
## iter 80 value 13541.583759
## iter 90 value 13540.386199
## iter 100 value 13539.604916
## iter 110 value 13536.860853
## iter 120 value 13535.643158
## iter 130 value 13535.589069
## final value 13535.578458
## converged
summary(nn1)
## a 1-5-1 network with 16 weights
## options were - linear output units decay=0.1
## b->h1 i1->h1
## 1.06 0.69
## b->h2 i1->h2
## 2.38 -38.17
## b->h3 i1->h3
## 2.49 -7.61
## b->h4 i1->h4
## 2.05 0.55
## b->h5 i1->h5
## 2.53 -7.60
## b->o h1->o h2->o h3->o h4->o h5->o
## 4.67 3.64 21.22 9.19 3.48 8.93
Now let’s plot the fit:
yhat1 = predict(nn1,ddf)
plot(ddf)
lines(ddf$lstats,yhat1,lty=1,col="red",lwd=3)
[Figure: the data with the 5-unit neural net fit overlaid as a red line]
Notice that you understand exactly how the single-layer neural net fit did this!!!
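To check that understanding, here is a minimal sketch (my addition; it relies on the weight names printed by summary(nn1) above) that reconstructs the fit by hand: each hidden unit applies the logistic sigmoid to its intercept plus weight times x, and the output is a linear combination of the five hidden units.
w = coef(nn1) # named weight vector, names as in summary(nn1)
sig = function(z) 1/(1+exp(-z)) # logistic activation used by nnet's hidden units
xs = ddf$lstats
h = sapply(1:5, function(j) sig(w[paste0("b->h",j)] + w[paste0("i1->h",j)]*xs))
yhand = w["b->o"] + h %*% w[paste0("h",1:5,"->o")]
max(abs(yhand - predict(nn1,ddf))) # should be essentially 0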
Now let’s fit the 5 unit neural net for a set of decay values.

Let’s do this in parallel using the R parallel package. This is simple enough that we don’t really need to speed it up, but we can illustrate the approach. You may want to use it for some of the more complicated model fits!
library(doParallel) #library for parallel computing
## Loading required package: foreach
## Loading required package: iterators
## Loading required package: parallel
registerDoParallel()
cat("number of workers is: ",getDoParWorkers(),"\n")
## number of workers is: 4
#you could pick the number of workers with:
# registerDoParallel(cores=num) where num is the number of workers.
Now we will use the function foreach to fit neural net models in parallel. First we set up a vector of decay values to try. Then we use foreach to run the neural net fits. foreach will return a list, with the i-th list element corresponding to the results obtained in the i-th loop iteration.
decv = c(.5,.1,.01,.005,.0025,.001,.0001,.00001)
#do a parallel loop over decay values
modsL = foreach(i=1:length(decv)) %dopar% {
  library(nnet) # I did not have to do this when I was not in Rmarkdown.
  set.seed(5*i) # I did have to do this.
  nnfit = nnet(medv~lstats,ddf,size=5,decay=decv[i],linout=T,maxit=10000)
  nnfit
}
is.list(modsL)
## [1] TRUE
length(modsL)
## [1] 8
The function foreach will launch a bunch of R processes, so things like random number seeds may have to be reset for each process.
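An aside (my addition; the doRNG package is not used in these notes): if you want reproducible parallel runs without hand-setting a seed inside each iteration, doRNG's %dorng% operator gives each iteration its own reproducible random stream.
library(doRNG) # assumption: doRNG is installed
registerDoRNG(99) # one seed controls all the parallel streams
modsL2 = foreach(i=1:length(decv)) %dorng% {
  library(nnet)
  nnet(medv~lstats,ddf,size=5,decay=decv[i],linout=T,maxit=10000)
}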
Now we can plot all the fits by looping over the list of models.
plot(ddf)
for(i in 1:length(modsL)) {
  yhat = predict(modsL[[i]],ddf)
  lines(ddf$lstats,yhat,col=i,lty=i,lwd=2)
}
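To tell the curves apart you could add a legend; a minimal base-graphics sketch (my addition, not in the original code), matching the col and lty used in the loop:
legend("topright", legend=paste("decay =",decv),
       col=1:length(decv), lty=1:length(decv), lwd=2)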
[Figure: the data with the 5-unit fits for all eight decay values overlaid]
Problem
Fit the neural net model with size=100 and decay=.001 and plot the fit. How does it look? Try running the fit at least twice to see that it changes.
Redo the loop over decay values with size=100. How does it look now? Do we need 100 units? Will decay be more important with 100 units than it was with 5?
Digit Recognition with R package h2o
First, let’s fire up h2o.
print(date())
## [1] "Tue Apr 16 16:22:12 2019"
library(h2o)
##
## ----------------------------------------------------------------------
##
## Your next step is to start H2O:
## > h2o.init()
##
## For H2O package documentation, ask for help:
## > ??h2o
##
## After starting H2O, you can use the Web UI at http://localhost:54321
## For more information visit http://docs.h2o.ai
##
## ----------------------------------------------------------------------
##
## Attaching package: 'h2o'
## The following objects are masked from 'package:stats':
##
## cor, sd, var
## The following objects are masked from 'package:base':
##
## &&, %*%, %in%, ||, apply, as.factor, as.numeric, colnames,
## colnames<-, ifelse, is.character, is.factor, is.numeric, log,
## log10, log1p, log2, round, signif, trunc
h2o.init()
##
## H2O is not running yet, starting it now...
##
## Note: In case of errors look at the following log files:
## /tmp/RtmpzaPmRq/h2o_root_started_from_r.out
## /tmp/RtmpzaPmRq/h2o_root_started_from_r.err
##
##
## Starting H2O JVM and connecting: . Connection successful!
##
## R is connected to the H2O cluster:
## H2O cluster uptime: 1 seconds 203 milliseconds
## H2O cluster timezone: America/Phoenix
## H2O data parsing timezone: UTC
## H2O cluster version: 3.20.0.8
## H2O cluster version age: 6 months and 25 days !!!
## H2O cluster name: H2O_started_from_R_root_jrw534
## H2O cluster total nodes: 1
## H2O cluster total memory: 6.84 GB
## H2O cluster total cores: 8
## H2O cluster allowed cores: 8
## H2O cluster healthy: TRUE
## H2O Connection ip: localhost
## H2O Connection port: 54321
## H2O Connection proxy: NA
## H2O Internal Security: FALSE
## H2O API Extensions: XGBoost, Algos, AutoML, Core V3, Core V4
## R Version: R version 3.5.1 (2018-07-02)
## Warning in h2o.clusterInfo():
## Your H2O cluster version is too old (6 months and 25 days)!
## Please download and install the latest version from http://h2o.ai/download/
Now we can read in the data.
In order to make things run faster I’ll down sample to just ns=10,000 observations.
train60D = read.csv("http://www.rob-mcculloch.org/data/mnist-train.csv")
train60D$C785 = as.factor(train60D$C785)
n = nrow(train60D)
set.seed(99)
ns = 10000
trainDS = train60D[sample(1:n,ns),]
trainS = as.h2o(trainDS,"trainS")
## |=================================================================| 100%
testD = read.csv("http://www.rob-mcculloch.org/data/mnist-test.csv")
testD$C785 = as.factor(testD$C785)
test = as.h2o(testD,"test")
## |=================================================================| 100%
x=1:784;y=785
print(ls())
## [1] "ddf" "decv" "i" "lstats" "modsL" "n"
## [7] "nn1" "ns" "oo" "rg" "test" "testD"
## [13] "train60D" "trainDS" "trainS" "x" "y" "yhat"
## [19] "yhat1"
print(h2o.ls())
## key
## 1 test
## 2 trainS
Let’s run h2o.deeplearning at settings similar to the ones that were found to work in the lecture notes. I dropped the layer/node architecture down to (50,50) so it would run faster. On my laptop it took about 90 seconds to run the one below.
I don’t know how long it will take on your machine.
fp = file.path("./files","mDNNdrop")
if(file.exists(fp)) {
  mDNNdrop = h2o.loadModel(fp)
} else {
  tm = system.time({
    mDNNdrop = h2o.deeplearning(x,y,training_frame = trainS,
                                hidden=c(50,50),
                                activation="TanhWithDropout",
                                hidden_dropout_ratios=c(.1,.1),
                                l1=1e-4,
                                epochs=2000,
                                model_id="mDNNdrop",
                                validation_frame=test)
  })
}
## Warning in .h2o.startModelJob(algo, params, h2oRestApiVersion): Dropping bad and constant columns: [C646, C645, C644, C365, C760, C51, C53, C52, C55, C54, C57, C56, C59, C58, C533, C253, C60, C703, C702, C701, C700, C1, C422, C2, C784, C3, C420, C783, C4, C782, C5, C143, C781, C6, C142, C780, C7, C141, C8, C9, C674, C673, C672, C393, C84, C83, C86, C85, C88, C87, C729, C728, C727, C726, C169, C561, C281, C11, C10, C12, C15, C617, C616, C17, C16, C19, C18, C699, C732, C731, C730, C450, C170, C20, C22, C21, C24, C23, C26, C25, C28, C505, C27, C29, C589, C225, C588, C31, C30, C32, C35, C759, C758, C757, C756, C755, C754, C115, C753, C477, C113, C112, C111, C197].
## |=================================================================| 100%
cat("the time is: ",tm,"\n")
## the time is: 0.617 0.005 80.098 0 0
print(h2o.confusionMatrix(mDNNdrop,valid=TRUE))
## Confusion Matrix: Row labels: Actual class; Column labels: Predicted class
## 0 1 2 3 4 5 6 7 8 9 Error Rate
## 0 959 0 2 1 0 9 6 1 2 0 0.0214 = 21 / 980
## 1 0 1111 1 7 0 1 5 3 7 0 0.0211 = 24 / 1,135
## 2 20 3 954 14 7 1 10 7 16 0 0.0756 = 78 / 1,032
## 3 0 1 18 946 0 15 2 16 9 3 0.0634 = 64 / 1,010
## 4 1 0 3 0 938 1 11 4 3 21 0.0448 = 44 / 982
## 5 6 2 5 33 10 775 15 11 29 6 0.1312 = 117 / 892
## 6 10 4 5 1 9 12 911 3 3 0 0.0491 = 47 / 958
## 7 3 5 19 8 8 0 2 964 1 18 0.0623 = 64 / 1,028
## 8 11 6 7 23 12 13 10 11 874 7 0.1027 = 100 / 974
## 9 7 6 3 12 33 8 2 22 4 912 0.0961 = 97 / 1,009
## Totals 1017 1138 1017 1045 1017 835 974 1042 948 967 0.0656 = 656 / 10,000
missclass = h2o.performance(mDNNdrop,valid=TRUE)@metrics$mean_per_class_error
cat("the mean per class error is: ",missclass,"\n")
## the mean per class error is: 0.06676157
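An aside (my addition): rather than reaching into the @metrics slot, the same number is available through h2o's accessor function, which may be more stable across package versions.
perf = h2o.performance(mDNNdrop,valid=TRUE)
h2o.mean_per_class_error(perf) # same quantity via the documented accessor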
## if you like it, keep it
#h2o.saveModel(mDNNdrop,path="./files")
print(date())
## [1] "Tue Apr 16 16:25:01 2019"
Problem
I always used dropout. Is that a good idea? Change the settings so that dropout is not used (a hint sketch follows below). Is it worse or better? Do a couple of runs.
Look at the help for h2o.deeplearning. Pick another option and try changing it to see if you can improve the prediction.
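For the first item, a minimal sketch of a no-dropout variant (my addition; it assumes only the activation and dropout arguments change and keeps everything else as above):
mDNNplain = h2o.deeplearning(x,y,training_frame = trainS,
                             hidden=c(50,50),
                             activation="Tanh", # plain tanh units, no dropout
                             l1=1e-4,
                             epochs=2000,
                             validation_frame=test)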
