[root@node1 aas]# ls
ch02 ch03 spark-1.2.-bin-hadoop2. spark-1.2.-bin-hadoop2..tgz
[root@node1 aas]# cd spark-1.2.-bin-hadoop2.
[root@node1 spark-1.2.-bin-hadoop2.]# cd ..
[root@node1 aas]# mkdir ch04
[root@node1 aas]# cd ch04
[root@node1 ch04]# ls
[root@node1 ch04]# wget https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/covtype.data.gz
-- https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/covtype.data.gz
Resolving archive.ics.uci.edu... 128.195.10.249
Connecting to archive.ics.uci.edu|128.195.10.249|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: (11M) [application/x-gzip]
Saving to: 'covtype.data.gz'

100%[==========================================>] 2.62M/s

(2.53 MB/s) - 'covtype.data.gz' saved

[root@node1 ch04]# wget https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/covtype.info
-- https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/covtype.info
Resolving archive.ics.uci.edu... 128.195.10.249
Connecting to archive.ics.uci.edu|128.195.10.249|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: (14K) [text/plain]
Saving to: 'covtype.info'

100%[==========================================>] --.-K/s

(15.6 MB/s) - 'covtype.info' saved

[root@node1 ch04]# wget https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/old_covtype.info
-- https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/old_covtype.info
Resolving archive.ics.uci.edu... 128.195.10.249
Connecting to archive.ics.uci.edu|128.195.10.249|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: (.7K) [text/plain]
Saving to: 'old_covtype.info'

100%[==========================================>] --.-K/s

(12.7 MB/s) - 'old_covtype.info' saved

Put the data on HDFS

[root@node1 ch04]# ls
covtype.data.gz covtype.info old_covtype.info
[root@node1 ch04]# gunzip -d covtype.data.gz
[root@node1 ch04]# ll
total
-rw-r--r-- root root Sep covtype.data
-rw-r--r-- root root Apr covtype.info
-rw-r--r-- root root Sep old_covtype.info
[root@node1 ch04]# hdfs dfs -mkdir /user/root/covtype
[root@node1 ch04]# hdfs dfs -put * /user/root/covtype
[root@node1 ch04]# hdfs dfs -ls /user/root/covtype
Found 3 items
-rw-r--r-- root supergroup -- : /user/root/covtype/covtype.data
-rw-r--r-- root supergroup -- : /user/root/covtype/covtype.info
-rw-r--r-- root supergroup -- : /user/root/covtype/old_covtype.info

Start spark-shell

[root@node1 ch04]# ../spark-1.2.-bin-hadoop2./bin/spark-shell --master yarn-client
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.2.
      /_/

Using Scala version 2.10. (OpenJDK 64-Bit Server VM, Java 1.7.0_09-icedtea)
Type in expressions to have them evaluated.
Type :help for more information.
WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Spark context available as sc.
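(In yarn-client mode the driver runs inside the local spark-shell process, while the executors run in YARN containers on the cluster.)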

Run the code from Section 4.7. (Note: following the book, the session below reads hdfs:///user/ds/covtype.data; the files above were uploaded to /user/root/covtype, so adjust the path to match your layout.)

scala> import org.apache.spark.mllib.linalg._
import org.apache.spark.mllib.linalg._

scala> import org.apache.spark.mllib.regression._
import org.apache.spark.mllib.regression._

scala> val rawData = sc.textFile("hdfs:///user/ds/covtype.data")
rawData: org.apache.spark.rdd.RDD[String] = hdfs:///user/ds/covtype.data MappedRDD[1] at textFile at <console>:18

scala> val data = rawData.map { line =>
| val values = line.split(',').map(_.toDouble)
| val featureVector = Vectors.dense(values.init)
| val label = values.last - 1
| LabeledPoint(label, featureVector)
| }
data: org.apache.spark.rdd.RDD[org.apache.spark.mllib.regression.LabeledPoint] = MappedRDD[] at map at <console>:

scala> val Array(trainData, cvData, testData) = data.randomSplit(Array(0.8, 0.1, 0.1))
trainData: org.apache.spark.rdd.RDD[org.apache.spark.mllib.regression.LabeledPoint] = PartitionwiseSampledRDD[] at randomSplit at <console>:
cvData: org.apache.spark.rdd.RDD[org.apache.spark.mllib.regression.LabeledPoint] = PartitionwiseSampledRDD[] at randomSplit at <console>:
testData: org.apache.spark.rdd.RDD[org.apache.spark.mllib.regression.LabeledPoint] = PartitionwiseSampledRDD[] at randomSplit at <console>:

scala> trainData.cache()
res0: trainData.type = PartitionwiseSampledRDD[] at randomSplit at <console>:

scala> cvData.cache()
res1: cvData.type = PartitionwiseSampledRDD[] at randomSplit at <console>:

scala> testData.cache()
res2: testData.type = PartitionwiseSampledRDD[] at randomSplit at <console>:22
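Each line of covtype.data is a CSV record: 54 numeric columns followed by the target cover type in 1-7, which the map above shifts down to 0-6 as MLlib expects. A quick sanity check on the split (a hypothetical snippet, not from the original session):

// The three random splits should jointly cover all 581,012 covtype records.
val counts = Seq(trainData, cvData, testData).map(_.count())
println(counts.mkString(", "))  // roughly 80% / 10% / 10% of the total
println(counts.sum)             // expect 581012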

scala> import org.apache.spark.mllib.evaluation._
import org.apache.spark.mllib.evaluation._


scala> import org.apache.spark.mllib.tree._
import org.apache.spark.mllib.tree._


scala> import org.apache.spark.mllib.tree.model._
import org.apache.spark.mllib.tree.model._


scala> import org.apache.spark.rdd._
import org.apache.spark.rdd._


scala> def getMetrics(model: DecisionTreeModel, data: RDD[LabeledPoint]):
| MulticlassMetrics = {
| val predictionsAndLabels = data.map(example => (model.predict(example.features), example.label))
| new MulticlassMetrics(predictionsAndLabels)
| }
getMetrics: (model: org.apache.spark.mllib.tree.model.DecisionTreeModel, data: org.apache.spark.rdd.RDD[org.apache.spark.mllib.regression.LabeledPoint])org.apache.spark.mllib.evaluation.MulticlassMetrics
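getMetrics pairs each example's predicted class with its true label and hands the (prediction, label) pairs to MulticlassMetrics, which derives the confusion matrix, overall accuracy, and per-class precision/recall from them.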


scala> val model = DecisionTree.trainClassifier(trainData, 7, Map[Int, Int](), "gini", 4, 100)
model: org.apache.spark.mllib.tree.model.DecisionTreeModel = DecisionTreeModel classifier of depth 4 with 31 nodes


scala> val metrics = getMetrics(model, cvData)
metrics: org.apache.spark.mllib.evaluation.MulticlassMetrics = org.apache.spark.mllib.evaluation.MulticlassMetrics@5d574c23


scala> metrics.confusionMatrix
res6: org.apache.spark.mllib.linalg.Matrix =
15535.0 5345.0 21.0 0.0 0.0 0.0 392.0
6669.0 20855.0 688.0 0.0 5.0 0.0 47.0
0.0 610.0 2942.0 0.0 0.0 0.0 0.0
0.0 0.0 274.0 0.0 0.0 0.0 0.0
12.0 874.0 57.0 0.0 15.0 0.0 0.0
0.0 446.0 1318.0 0.0 0.0 0.0 0.0
1150.0 19.0 8.0 0.0 0.0 0.0 905.0
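Rows of the confusion matrix are actual classes and columns are predicted classes, so the diagonal counts correct predictions. The overall accuracy reported next is just the diagonal sum over the grand total; a small sketch (assumed, using only the Matrix trait's public toArray accessor):

// Accuracy from the confusion matrix: diagonal sum / total count.
// toArray returns the entries in column-major order.
val cm = metrics.confusionMatrix
val entries = cm.toArray
val n = cm.numRows
val correct = (0 until n).map(i => entries(i + i * n)).sum
println(correct / entries.sum)  // should match metrics.precision below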


scala> metrics.precision
res7: Double = 0.6917696392665028

scala> (0 until 7).map(
| cat => (metrics.precision(cat), metrics.recall(cat))
| ).foreach(println)
(0.6648549174013524,0.729582491898746)
(0.7408788944545099,0.7378644211718086)
(0.554257724189902,0.8282657657657657)
(0.0,0.0)
(0.75,0.015657620041753653)
(0.0,0.0)
(0.6733630952380952,0.4346781940441883)
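Classes 3 and 5 (the fourth and sixth lines) show 0.0 for both precision and recall: the model never predicts them at all, as the all-zero fourth and sixth columns of the confusion matrix above confirm. Rare classes like these are simply lost on a depth-4 tree.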

scala> import org.apache.spark.rdd._
import org.apache.spark.rdd._

scala> def classProbabilities(data: RDD[LabeledPoint]): Array[Double] = {
| val countsByCategory = data.map(_.label).countByValue()
| val counts = countsByCategory.toArray.sortBy(_._1).map(_._2)
| counts.map(_.toDouble / counts.sum)
| }
classProbabilities: (data: org.apache.spark.rdd.RDD[org.apache.spark.mllib.regression.LabeledPoint])Array[Double]

scala> val trainPriorProbabilities = classProbabilities(trainData)
trainPriorProbabilities: Array[Double] = Array(0.3644680841907762, 0.48778063233452534, 0.06163475731247069, 0.004682046846288574, 0.0163893156379504, 0.029860958700732, 0.035184204977256786)

scala> val cvPriorProbabilities = classProbabilities(cvData)
cvPriorProbabilities: Array[Double] = Array(0.36594084589341264, 0.4857442384037672, 0.061044563218588345, 0.004708955608641105, 0.016464158660869265, 0.03031604997679894, 0.03578118823792256)

scala> trainPriorProbabilities.zip(cvPriorProbabilities).map {
| case (trainProb, cvProb) => trainProb * cvProb
| }.sum
res9: Double = 0.3765289404519721
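This is the accuracy a "classifier" that guesses labels at random in proportion to the class priors would achieve: the probability of agreeing on class i is trainProb(i) * cvProb(i), summed over all classes. At about 0.38 it is a baseline that the decision tree's 0.69 comfortably clears.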

scala> val evaluations =
| for (impurity <- Array("gini", "entropy");
|      depth <- Array(1, 20);
|      bins <- Array(10, 300))
| yield {
|   val model = DecisionTree.trainClassifier(trainData, 7, Map[Int, Int](), impurity, depth, bins)
|   val predictionsAndLabels = cvData.map(example => (model.predict(example.features), example.label))
|   val accuracy = new MulticlassMetrics(predictionsAndLabels).precision
|   ((impurity, depth, bins), accuracy)
| }
evaluations: Array[((String, Int, Int), Double)] = Array(((gini,1,10),0.6319968377816351), ((gini,1,300),0.6323577431385017), ((gini,20,10),0.889253613350061), ((gini,20,300),0.9074191829790159), ((entropy,1,10),0.4857442384037672), ((entropy,1,300),0.4857442384037672), ((entropy,20,10),0.8946500077336862), ((entropy,20,300),0.9099455204770825))

scala> evaluations.sortBy(_._2).reverse.foreach(println)
((entropy,20,300),0.9099455204770825)
((gini,20,300),0.9074191829790159)
((entropy,20,10),0.8946500077336862)
((gini,20,10),0.889253613350061)
((gini,1,300),0.6323577431385017)
((gini,1,10),0.6319968377816351)
((entropy,1,300),0.4857442384037672)
((entropy,1,10),0.4857442384037672)
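Deeper trees and more bins clearly help: entropy with depth 20 and 300 bins reaches about 0.91 on the CV set, versus 0.69 for the first depth-4 tree. A natural next step (a sketch, not part of the original session) is to retrain on train + CV with the winning hyperparameters and evaluate once on the held-out test set:

// Retrain with the best hyperparameters found above and measure
// accuracy on the untouched test split.
val bestModel = DecisionTree.trainClassifier(
  trainData.union(cvData), 7, Map[Int, Int](), "entropy", 20, 300)
println(getMetrics(bestModel, testData).precision)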

scala> val data = rawData.map { line =>
| val values = line.split(',').map(_.toDouble)
| val wilderness = values.slice(10, 14).indexOf(1.0).toDouble
| val soil = values.slice(14, 54).indexOf(1.0).toDouble
| val featureVector =
|   Vectors.dense(values.slice(0, 10) :+ wilderness :+ soil)
| val label = values.last - 1
| LabeledPoint(label, featureVector)
| }
data: org.apache.spark.rdd.RDD[org.apache.spark.mllib.regression.LabeledPoint] = MappedRDD[391] at map at <console>:47
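The map above undoes the one-hot encodings: columns 10-13 are a one-hot block for the 4 wilderness areas and columns 14-53 for the 40 soil types, and indexOf(1.0) recovers each category's index as a single number (features 10 and 11 in the new vector). That is why the next sweep passes Map(10 -> 4, 11 -> 40), telling the tree those features are categorical with 4 and 40 distinct values. A minimal illustration of the trick (values invented):

// indexOf(1.0) recovers the category code from a one-hot block.
val oneHot = Array(0.0, 0.0, 1.0, 0.0)        // hypothetical 4-wide wilderness block
val wilderness = oneHot.indexOf(1.0).toDouble // 2.0

Note that trainData and cvData must be re-derived from the new data (e.g., by re-running randomSplit and cache) so that the sweep below actually sees the re-encoded features.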

scala>

scala> val evaluations =
| for (impurity <- Array("gini", "entropy");
|      depth <- Array(10, 20, 30);
|      bins <- Array(40, 300))
| yield {
|   val model = DecisionTree.trainClassifier(trainData, 7, Map(10 -> 4, 11 -> 40), impurity, depth, bins)
|   val trainAccuracy = getMetrics(model, trainData).precision
|   val cvAccuracy = getMetrics(model, cvData).precision
|   ((impurity, depth, bins), (trainAccuracy, cvAccuracy))
| }
evaluations: Array[((String, Int, Int), (Double, Double))] = Array(((gini,10,40),(0.7772542032989496,0.7730420884389984)), ((gini,10,300),(0.7849615065174265,0.7793665251688522)), ((gini,20,40),(0.9393033733975393,0.904480382215959)), ((gini,20,300),(0.9421715574260792,0.904480382215959)), ((gini,30,40),(0.9972329447406585,0.9341089934177738)), ((gini,30,300),(0.9974352022790551,0.9347964321927579)), ((entropy,10,40),(0.7768755083334409,0.7716672108890302)), ((entropy,10,300),(0.7715307452975122,0.7655318198222971)), ((entropy,20,40),(0.9487578374796128,0.9103407977726984)), ((entropy,20,300),(0.9484781196073622,0.9088971763452317)), ((entropy,30,40),(0.998582045555283,0.9374430714764467)), ((entropy,30,300),(0.9990833860493938,0.9413786584632307)))
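Note the gap at depth 30 between training accuracy (≈0.999) and CV accuracy (≈0.94): the deepest trees fit the training set almost perfectly, a classic sign of overfitting, which motivates averaging many trees with a random forest next.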

scala> val forest = RandomForest.trainClassifier(
| trainData, 7, Map(10 -> 4, 11 -> 40), 20,
| "auto", "entropy", 30, 300)
forest: org.apache.spark.mllib.tree.model.RandomForestModel =
TreeEnsembleModel classifier with 20 trees
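Compared to DecisionTree.trainClassifier, RandomForest.trainClassifier additionally takes the number of trees (20) and a feature-subset strategy ("auto", letting MLlib decide how many features each split considers). To classify a new example, assemble a 12-element vector (10 numeric features plus the wilderness and soil category codes) and call predict. A hedged sketch with invented feature values:

// Hypothetical input: 10 numeric features, then wilderness and soil codes.
val input = "2709,125,28,67,23,3224,253,207,61,6094,0,29"
val vector = Vectors.dense(input.split(',').map(_.toDouble))
forest.predict(vector) // returns the predicted class as a Double in 0.0-6.0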
