[root@node1 aas]# ls
ch02 ch03 spark-1.2.-bin-hadoop2. spark-1.2.-bin-hadoop2..tgz
[root@node1 aas]# cd spark-1.2.-bin-hadoop2.
[root@node1 spark-1.2.-bin-hadoop2.]# cd ..
[root@node1 aas]# mkdir ch04
[root@node1 aas]# cd ch04
[root@node1 ch04]# ls
[root@node1 ch04]# wget https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/covtype.data.gz
Resolving archive.ics.uci.edu... 128.195.10.249
Connecting to archive.ics.uci.edu|128.195.10.249|... connected.
HTTP request sent, awaiting response... OK
Length: (11M) [application/x-gzip]
Saving to: 'covtype.data.gz'
'covtype.data.gz' saved
[root@node1 ch04]# wget https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/covtype.info
Resolving archive.ics.uci.edu... 128.195.10.249
Connecting to archive.ics.uci.edu|128.195.10.249|... connected.
HTTP request sent, awaiting response... OK
Length: (14K) [text/plain]
Saving to: 'covtype.info'
'covtype.info' saved
[root@node1 ch04]# wget https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/old_covtype.info
Resolving archive.ics.uci.edu... 128.195.10.249
Connecting to archive.ics.uci.edu|128.195.10.249|... connected.
HTTP request sent, awaiting response... OK
Length: (.7K) [text/plain]
Saving to: 'old_covtype.info'
'old_covtype.info' saved

Upload the data to HDFS

[root@node1 ch04]# ls
covtype.data.gz covtype.info old_covtype.info
[root@node1 ch04]# gunzip -d covtype.data.gz
[root@node1 ch04]# ll
total
-rw-r--r-- root root Sep covtype.data
-rw-r--r-- root root Apr covtype.info
-rw-r--r-- root root Sep old_covtype.info
[root@node1 ch04]# hdfs dfs -mkdir /user/root/covtype
[root@node1 ch04]# hdfs dfs -put * /user/root/covtype
[root@node1 ch04]# hdfs dfs -ls /user/root/covtype
Found 3 items
-rw-r--r-- root supergroup -- : /user/root/covtype/covtype.data
-rw-r--r-- root supergroup -- : /user/root/covtype/covtype.info
-rw-r--r-- root supergroup -- : /user/root/covtype/old_covtype.info

Start spark-shell

[root@node1 ch04]# ../spark-1.2.-bin-hadoop2./bin/spark-shell --master yarn-client
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.2.
      /_/

Using Scala version 2.10. (OpenJDK -Bit Server VM, Java 1.7.0_09-icedtea)
Type in expressions to have them evaluated.
Type :help for more information.
WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Spark context available as sc.

Run the code from Section 4.7 (note: the book's code reads from hdfs:///user/ds/; adjust the path to wherever you uploaded the data, e.g. hdfs:///user/root/covtype/covtype.data):

scala> import org.apache.spark.mllib.linalg._
import org.apache.spark.mllib.linalg._

scala> import org.apache.spark.mllib.regression._
import org.apache.spark.mllib.regression._

scala> val rawData = sc.textFile("hdfs:///user/ds/covtype.data")
rawData: org.apache.spark.rdd.RDD[String] = hdfs:///user/ds/covtype.data MappedRDD[1] at textFile at <console>:18

scala> val data = rawData.map { line =>
     |   val values = line.split(',').map(_.toDouble)
     |   val featureVector = Vectors.dense(values.init)
     |   val label = values.last - 1
     |   LabeledPoint(label, featureVector)
     | }
data: org.apache.spark.rdd.RDD[org.apache.spark.mllib.regression.LabeledPoint] = MappedRDD[] at map at <console>:

scala> val Array(trainData, cvData, testData) = data.randomSplit(Array(0.8, 0.1, 0.1))
trainData: org.apache.spark.rdd.RDD[org.apache.spark.mllib.regression.LabeledPoint] = PartitionwiseSampledRDD[] at randomSplit at <console>:
cvData: org.apache.spark.rdd.RDD[org.apache.spark.mllib.regression.LabeledPoint] = PartitionwiseSampledRDD[] at randomSplit at <console>:
testData: org.apache.spark.rdd.RDD[org.apache.spark.mllib.regression.LabeledPoint] = PartitionwiseSampledRDD[] at randomSplit at <console>:

scala> trainData.cache()
res0: trainData.type = PartitionwiseSampledRDD[] at randomSplit at <console>:

scala> cvData.cache()
res1: cvData.type = PartitionwiseSampledRDD[] at randomSplit at <console>:

scala> testData.cache()
res2: testData.type = PartitionwiseSampledRDD[] at randomSplit at <console>:22

scala> import org.apache.spark.mllib.evaluation._
import org.apache.spark.mllib.evaluation._


scala> import org.apache.spark.mllib.tree._
import org.apache.spark.mllib.tree._


scala> import org.apache.spark.mllib.tree.model._
import org.apache.spark.mllib.tree.model._


scala> import org.apache.spark.rdd._
import org.apache.spark.rdd._


scala> def getMetrics(model: DecisionTreeModel, data: RDD[LabeledPoint]): MulticlassMetrics = {
     |   val predictionsAndLabels = data.map(example => (model.predict(example.features), example.label))
     |   new MulticlassMetrics(predictionsAndLabels)
     | }
getMetrics: (model: org.apache.spark.mllib.tree.model.DecisionTreeModel, data: org.apache.spark.rdd.RDD[org.apache.spark.mllib.regression.LabeledPoint])org.apache.spark.mllib.evaluation.MulticlassMetrics


scala> val model = DecisionTree.trainClassifier(trainData, 7, Map[Int, Int](), "gini", 4, 100)
model: org.apache.spark.mllib.tree.model.DecisionTreeModel = DecisionTreeModel classifier of depth 4 with 31 nodes
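A binary decision tree of depth d has at most 2^(d+1) - 1 nodes (counting the root as depth 0), so the 31 nodes reported here mean the depth-4 tree is completely full — a hint that the default depth of 4 may be too shallow for this data. A quick check in plain Python:

```python
def max_nodes(depth):
    # A full binary tree of depth d has 2^(d+1) - 1 nodes in total.
    return 2 ** (depth + 1) - 1

print(max_nodes(4))  # -> 31, matching the node count reported above
```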


scala> val metrics = getMetrics(model, cvData)
metrics: org.apache.spark.mllib.evaluation.MulticlassMetrics = org.apache.spark.mllib.evaluation.MulticlassMetrics@5d574c23


scala> metrics. confusionMatrix
res6: org.apache.spark.mllib.linalg.Matrix =
15535.0 5345.0 21.0 0.0 0.0 0.0 392.0
6669.0 20855.0 688.0 0.0 5.0 0.0 47.0
0.0 610.0 2942.0 0.0 0.0 0.0 0.0
0.0 0.0 274.0 0.0 0.0 0.0 0.0
12.0 874.0 57.0 0.0 15.0 0.0 0.0
0.0 446.0 1318.0 0.0 0.0 0.0 0.0
1150.0 19.0 8.0 0.0 0.0 0.0 905.0


scala> metrics.precision
res7: Double = 0.6917696392665028

scala> (0 until 7).map(
| cat => (metrics.precision(cat), metrics.recall(cat))
| ).foreach(println)
(0.6648549174013524,0.729582491898746)
(0.7408788944545099,0.7378644211718086)
(0.554257724189902,0.8282657657657657)
(0.0,0.0)
(0.75,0.015657620041753653)
(0.0,0.0)
(0.6733630952380952,0.4346781940441883)
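All of the figures above can be re-derived from the confusion matrix alone, which makes the output easier to interpret. A minimal Python sketch using the matrix printed above (rows are true classes, columns are predicted classes):

```python
# Confusion matrix from the transcript: rows = true class, columns = predicted class.
cm = [
    [15535.0,  5345.0,   21.0, 0.0,  0.0, 0.0, 392.0],
    [ 6669.0, 20855.0,  688.0, 0.0,  5.0, 0.0,  47.0],
    [    0.0,   610.0, 2942.0, 0.0,  0.0, 0.0,   0.0],
    [    0.0,     0.0,  274.0, 0.0,  0.0, 0.0,   0.0],
    [   12.0,   874.0,   57.0, 0.0, 15.0, 0.0,   0.0],
    [    0.0,   446.0, 1318.0, 0.0,  0.0, 0.0,   0.0],
    [ 1150.0,    19.0,    8.0, 0.0,  0.0, 0.0, 905.0],
]

# Overall accuracy: correctly classified examples (the diagonal) over all examples.
total = sum(sum(row) for row in cm)
correct = sum(cm[i][i] for i in range(7))
accuracy = correct / total  # matches metrics.precision above (~0.6918)

def precision(cat):
    # Of everything predicted as `cat` (column sum), how much was really `cat`?
    col = sum(cm[i][cat] for i in range(7))
    return cm[cat][cat] / col if col else 0.0

def recall(cat):
    # Of everything that really is `cat` (row sum), how much was predicted as `cat`?
    row = sum(cm[cat])
    return cm[cat][cat] / row if row else 0.0

print(accuracy)
for cat in range(7):
    print(precision(cat), recall(cat))
```

The (0.0, 0.0) pairs for classes 3 and 5 fall out directly: those columns and diagonals are all zero, meaning the depth-4 tree never predicts those classes at all.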

scala> import org.apache.spark.rdd._
import org.apache.spark.rdd._

scala> def classProbabilities(data: RDD[LabeledPoint]): Array[Double] = {
     |   val countsByCategory = data.map(_.label).countByValue()
     |   val counts = countsByCategory.toArray.sortBy(_._1).map(_._2)
     |   counts.map(_.toDouble / counts.sum)
     | }
classProbabilities: (data: org.apache.spark.rdd.RDD[org.apache.spark.mllib.regression.LabeledPoint])Array[Double]

scala> val trainPriorProbabilities = classProbabilities(trainData)
trainPriorProbabilities: Array[Double] = Array(0.3644680841907762, 0.48778063233452534, 0.06163475731247069, 0.004682046846288574, 0.0163893156379504, 0.029860958700732, 0.035184204977256786)

scala> val cvPriorProbabilities = classProbabilities(cvData)
cvPriorProbabilities: Array[Double] = Array(0.36594084589341264, 0.4857442384037672, 0.061044563218588345, 0.004708955608641105, 0.016464158660869265, 0.03031604997679894, 0.03578118823792256)

scala> trainPriorProbabilities.zip(cvPriorProbabilities).map {
| case (trainProb, cvProb) => trainProb * cvProb
| }.sum
res9: Double = 0.3765289404519721
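This 0.3765 figure is the accuracy of a "random guessing" baseline: a classifier that guesses class i with the training-set prior probability is right on a CV example of class i with probability trainProb(i) * cvProb(i), so summing over classes gives its expected accuracy. A quick check in plain Python with the priors printed above:

```python
# Class priors printed by classProbabilities above.
train_priors = [0.3644680841907762, 0.48778063233452534, 0.06163475731247069,
                0.004682046846288574, 0.0163893156379504, 0.029860958700732,
                0.035184204977256786]
cv_priors = [0.36594084589341264, 0.4857442384037672, 0.061044563218588345,
             0.004708955608641105, 0.016464158660869265, 0.03031604997679894,
             0.03578118823792256]

# Expected accuracy of guessing classes at random according to the training prior.
baseline = sum(t * c for t, c in zip(train_priors, cv_priors))
print(baseline)  # ~0.3765
```

The depth-4 tree's 0.69 accuracy therefore beats random guessing, but not by as much as it first appears.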

scala> val evaluations =
     |   for (impurity <- Array("gini", "entropy");
     |        depth <- Array(1, 20);
     |        bins <- Array(10, 300))
     |     yield {
     |       val model = DecisionTree.trainClassifier(trainData, 7, Map[Int, Int](), impurity, depth, bins)
     |       val predictionsAndLabels = cvData.map(example => (model.predict(example.features), example.label))
     |       val accuracy = new MulticlassMetrics(predictionsAndLabels).precision
     |       ((impurity, depth, bins), accuracy)
     |     }
evaluations: Array[((String, Int, Int), Double)] = Array(((gini,1,10),0.6319968377816351), ((gini,1,300),0.6323577431385017), ((gini,20,10),0.889253613350061), ((gini,20,300),0.9074191829790159), ((entropy,1,10),0.4857442384037672), ((entropy,1,300),0.4857442384037672), ((entropy,20,10),0.8946500077336862), ((entropy,20,300),0.9099455204770825))

scala> evaluations.sortBy(_._2).reverse.foreach(println)
((entropy,20,300),0.9099455204770825)
((gini,20,300),0.9074191829790159)
((entropy,20,10),0.8946500077336862)
((gini,20,10),0.889253613350061)
((gini,1,300),0.6323577431385017)
((gini,1,10),0.6319968377816351)
((entropy,1,300),0.4857442384037672)
((entropy,1,10),0.4857442384037672)

scala> val data = rawData.map { line =>
| val values = line.split(',').map(_.toDouble)
| val wilderness = values.slice(10, 14).indexOf(1.0).toDouble
| val soil = values.slice(14, 54).indexOf(1.0).toDouble
| val featureVector =
| Vectors.dense(values.slice(0, 10) :+ wilderness :+ soil)
| val label = values.last - 1
| LabeledPoint(label, featureVector)
| }
data: org.apache.spark.rdd.RDD[org.apache.spark.mllib.regression.LabeledPoint] = MappedRDD[391] at map at <console>:47
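The covtype data one-hot encodes two categorical features: columns 10-13 are a wilderness-area indicator and columns 14-53 a soil-type indicator. The map above collapses each block back into a single categorical value with indexOf(1.0), which lets the tree treat them as categorical features. The same idea in plain Python, on a made-up row (all values here are illustrative, not real covtype data):

```python
# Hypothetical covtype-style row: 10 numeric features, 4 one-hot wilderness
# columns, 40 one-hot soil columns, then the label (1-7).
row = [2596.0, 51.0, 3.0, 258.0, 0.0, 510.0, 221.0, 232.0, 148.0, 6279.0]
row += [0.0, 1.0, 0.0, 0.0]     # wilderness area 1 (second indicator is set)
row += [0.0] * 40               # soil one-hot block, filled in next
row[10 + 4 + 3] = 1.0           # soil type 3
row += [5.0]                    # raw label

# Collapse each one-hot block back to the index of its single 1.0.
wilderness = float(row[10:14].index(1.0))
soil = float(row[14:54].index(1.0))
features = row[0:10] + [wilderness, soil]
label = row[-1] - 1             # shift labels from 1-7 to 0-6

print(features, label)
```

With 12 features instead of 54, the Map(10 -> 4, 11 -> 40) argument below tells the trainer that features 10 and 11 are categorical with 4 and 40 distinct values.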

scala>

scala> val evaluations =
     |   for (impurity <- Array("gini", "entropy");
     |        depth <- Array(10, 20, 30);
     |        bins <- Array(40, 300))
     |     yield {
     |       val model = DecisionTree.trainClassifier(trainData, 7, Map(10 -> 4, 11 -> 40), impurity, depth, bins)
     |       val trainAccuracy = getMetrics(model, trainData).precision
     |       val cvAccuracy = getMetrics(model, cvData).precision
     |       ((impurity, depth, bins), (trainAccuracy, cvAccuracy))
     |     }
evaluations: Array[((String, Int, Int), (Double, Double))] = Array(((gini,10,40),(0.7772542032989496,0.7730420884389984)), ((gini,10,300),(0.7849615065174265,0.7793665251688522)), ((gini,20,40),(0.9393033733975393,0.904480382215959)), ((gini,20,300),(0.9421715574260792,0.904480382215959)), ((gini,30,40),(0.9972329447406585,0.9341089934177738)), ((gini,30,300),(0.9974352022790551,0.9347964321927579)), ((entropy,10,40),(0.7768755083334409,0.7716672108890302)), ((entropy,10,300),(0.7715307452975122,0.7655318198222971)), ((entropy,20,40),(0.9487578374796128,0.9103407977726984)), ((entropy,20,300),(0.9484781196073622,0.9088971763452317)), ((entropy,30,40),(0.998582045555283,0.9374430714764467)), ((entropy,30,300),(0.9990833860493938,0.9413786584632307)))

scala> val forest = RandomForest.trainClassifier(
     |   trainData, 7, Map(10 -> 4, 11 -> 40), 20,
     |   "auto", "entropy", 30, 300)
forest: org.apache.spark.mllib.tree.model.RandomForestModel =
TreeEnsembleModel classifier with 20 trees
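For classification, a random forest combines its trees by majority vote: each of the 20 trees predicts a class, and the class predicted by the most trees wins. The idea, sketched in plain Python with hypothetical per-tree votes (not output from the model above):

```python
from collections import Counter

# Hypothetical predictions from 20 individual trees for a single example.
tree_votes = [1.0] * 12 + [0.0] * 5 + [2.0] * 3

# Majority vote: the most common prediction across the ensemble wins.
prediction = Counter(tree_votes).most_common(1)[0][0]
print(prediction)  # -> 1.0
```

Because each tree is trained on a different bootstrap sample and considers a random subset of features at each split ("auto" above), the trees make partly independent errors, and voting averages those errors out.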
