Spark Machine Learning 9: Real-Time Machine Learning (Scala with sbt)
1 Online Learning
An online model updates itself continuously as new data arrives, instead of being retrained from scratch over and over as in offline (batch) training.
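As an illustration of what one such incremental update looks like, here is a minimal plain-Scala sketch (not MLlib code; the helper name and squared-error loss are assumptions) of a single stochastic gradient descent step for linear regression. Only the one new example is touched; no historical data is revisited:
// Hypothetical helper: one online SGD step on a single (x, y) example.
def sgdStep(w: Array[Double], x: Array[Double], y: Double, stepSize: Double): Array[Double] = {
  val prediction = w.zip(x).map { case (wi, xi) => wi * xi }.sum // current prediction w . x
  val error = prediction - y                                     // signed error on this example
  w.zip(x).map { case (wi, xi) => wi - stepSize * error * xi }   // move weights against the gradient
}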
2 Spark Streaming
- Discretized streams (DStreams)
- Input sources: Akka actors, message queues, Flume, Kafka, ...
http://spark.apache.org/docs/latest/streaming-programming-guide.html
- Lineage: the set of transformation and action operators applied to the DStream's underlying RDDs (see the sketch below)
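As a minimal sketch of lineage (a generic word-count pipeline, not from the original text; the object name is illustrative): each transformation below extends the DStream's lineage, and nothing executes until the context is started.
import org.apache.spark.streaming.{Seconds, StreamingContext}

object LineageSketch {
  def main(args: Array[String]) {
    val ssc = new StreamingContext("local[2]", "Lineage Sketch", Seconds(10))
    val lines = ssc.socketTextStream("localhost", 9999)    // input DStream
    val words = lines.flatMap(_.split(" "))                // transformation: extends the lineage
    val counts = words.map(w => (w, 1)).reduceByKey(_ + _) // more transformations
    counts.print()                                         // output operation: triggers execution per batch
    ssc.start()
    ssc.awaitTermination()
  }
}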
3 MLlib + Streaming Application
3.0 build.sbt
The build depends on Spark MLlib and Spark Streaming:
name := "scala-spark-streaming-app"
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies += "org.apache.spark" %% "spark-mllib" % "1.5.1"
libraryDependencies += "org.apache.spark" %% "spark-streaming" % "1.5.1"
To speed up dependency resolution with mirror repositories in China, create the following file:
~/.sbt/repositories
[repositories]
local
osc: http://maven.oschina.net/content/groups/public/
typesafe: http://repo.typesafe.com/typesafe/ivy-releases/, [organization]/[module]/(scala_[scalaVersion]/)(sbt_[sbtVersion]/)[revision]/[type]s/[artifact](-[classifier]).[ext], bootOnly
sonatype-oss-releases
maven-central
sonatype-oss-snapshots
3.1 Producing Messages
import java.io.PrintWriter
import java.net.ServerSocket
import scala.util.Random

object StreamingProducer {

  def main(args: Array[String]) {
    val random = new Random()

    // Maximum number of events per second
    val MaxEvents = 6

    // Read the list of possible names
    val namesResource = this.getClass.getResourceAsStream("/names.csv")
    val names = scala.io.Source.fromInputStream(namesResource)
      .getLines()
      .toList
      .head
      .split(",")
      .toSeq

    // Generate a sequence of possible products
    val products = Seq(
      "iPhone Cover" -> 9.99,
      "Headphones" -> 5.49,
      "Samsung Galaxy Cover" -> 8.95,
      "iPad Cover" -> 7.49
    )

    /** Generate a number of random product events */
    def generateProductEvents(n: Int) = {
      (1 to n).map { i =>
        val (product, price) = products(random.nextInt(products.size))
        val user = random.shuffle(names).head
        (user, product, price)
      }
    }

    // create a network producer
    val listener = new ServerSocket(9999)
    println("Listening on port: 9999")

    while (true) {
      val socket = listener.accept()
      new Thread() {
        override def run = {
          println("Got client connected from: " + socket.getInetAddress)
          val out = new PrintWriter(socket.getOutputStream(), true)

          while (true) {
            Thread.sleep(1000)
            val num = random.nextInt(MaxEvents)
            val productEvents = generateProductEvents(num)
            productEvents.foreach { event =>
              out.write(event.productIterator.mkString(","))
              out.write("\n")
            }
            out.flush()
            println(s"Created $num events...")
          }
          socket.close()
        }
      }.start()
    }
  }
}
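Before pointing Spark at the producer, you can sanity-check it from another terminal with netcat and watch the comma-separated events scroll by once per second:
nc localhost 9999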
sbt run
Multiple main classes detected, select one to run:
[1] MonitoringStreamingModel
[2] SimpleStreamingApp
[3] SimpleStreamingModel
[4] StreamingAnalyticsApp
[5] StreamingModelProducer
[6] StreamingProducer
[7] StreamingStateApp
Enter number: 6
3.2 Printing Messages
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SimpleStreamingApp {

  def main(args: Array[String]) {
    val ssc = new StreamingContext("local[2]", "First Streaming App", Seconds(10))
    val stream = ssc.socketTextStream("localhost", 9999)

    // here we simply print out the first few elements of each batch
    stream.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
sbt run
Enter number: 2
3.3 Streaming Analytics
import java.text.SimpleDateFormat
import java.util.Date
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingAnalyticsApp {

  def main(args: Array[String]) {
    val ssc = new StreamingContext("local[2]", "First Streaming App", Seconds(10))
    val stream = ssc.socketTextStream("localhost", 9999)

    // create stream of events from raw text elements
    val events = stream.map { record =>
      val event = record.split(",")
      (event(0), event(1), event(2))
    }

    /*
      We compute and print out stats for each batch.
      Since each batch is an RDD, we call foreachRDD on the DStream, and apply the usual RDD functions
      we used in Chapter 1.
    */
    events.foreachRDD { (rdd, time) =>
      val numPurchases = rdd.count()
      val uniqueUsers = rdd.map { case (user, _, _) => user }.distinct().count()
      val totalRevenue = rdd.map { case (_, _, price) => price.toDouble }.sum()
      val productsByPopularity = rdd
        .map { case (user, product, price) => (product, 1) }
        .reduceByKey(_ + _)
        .collect()
        .sortBy(-_._2)
      val mostPopular = productsByPopularity(0)

      val formatter = new SimpleDateFormat
      val dateStr = formatter.format(new Date(time.milliseconds))
      println(s"== Batch start time: $dateStr ==")
      println("Total purchases: " + numPurchases)
      println("Unique users: " + uniqueUsers)
      println("Total revenue: " + totalRevenue)
      println("Most popular product: %s with %d purchases".format(mostPopular._1, mostPopular._2))
    }

    // start the context
    ssc.start()
    ssc.awaitTermination()
  }
}
sbt run
Enter number: 4
3.4 Stateful Stream Processing
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingStateApp {
  import org.apache.spark.streaming.StreamingContext._

  // combine each user's purchases in the current batch with their running (count, revenue) state
  def updateState(prices: Seq[(String, Double)], currentTotal: Option[(Int, Double)]) = {
    val currentRevenue = prices.map(_._2).sum
    val currentNumberPurchases = prices.size
    val state = currentTotal.getOrElse((0, 0.0))
    Some((currentNumberPurchases + state._1, currentRevenue + state._2))
  }

  def main(args: Array[String]) {
    val ssc = new StreamingContext("local[2]", "First Streaming App", Seconds(10))
    // for stateful operations, we need to set a checkpoint location
    ssc.checkpoint("/tmp/sparkstreaming/")
    val stream = ssc.socketTextStream("localhost", 9999)

    // create stream of events from raw text elements
    val events = stream.map { record =>
      val event = record.split(",")
      (event(0), event(1), event(2).toDouble)
    }

    val users = events.map { case (user, product, price) => (user, (product, price)) }
    val revenuePerUser = users.updateStateByKey(updateState)
    revenuePerUser.print()

    // start the context
    ssc.start()
    ssc.awaitTermination()
  }
}
sbt run
Enter number: 7
4 Streaming Linear Regression
MLlib provides StreamingLinearRegressionWithSGD for online linear regression; its two key methods are listed below (a combined sketch follows the list):
- trainOn: update the model weights on each incoming batch of labeled data
- predictOn: generate predictions for a stream of feature vectors
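As a hedged sketch (assuming the Spark 1.5 API; the helper name is illustrative), trainOn and predictOnValues can be combined so that each prediction stays paired with its true label, which is convenient for computing errors downstream:
import org.apache.spark.mllib.regression.{LabeledPoint, StreamingLinearRegressionWithSGD}
import org.apache.spark.streaming.dstream.DStream

// Hypothetical helper: train on each batch, then emit (true label, prediction) pairs.
// predictOnValues keeps an arbitrary key alongside each prediction.
def trainAndPredict(model: StreamingLinearRegressionWithSGD,
                    labeled: DStream[LabeledPoint]): Unit = {
  model.trainOn(labeled)
  model.predictOnValues(labeled.map(lp => (lp.label, lp.features))).print()
}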
4.1 Streaming Data Generator
import java.io.PrintWriter
import java.net.ServerSocket
import scala.util.Random

object StreamingModelProducer {
  import breeze.linalg._

  def main(args: Array[String]) {
    // Maximum number of events per second
    val MaxEvents = 100
    val NumFeatures = 100
    val random = new Random()

    /** Function to generate a normally distributed dense vector */
    def generateRandomArray(n: Int) = Array.tabulate(n)(_ => random.nextGaussian())

    // Generate a fixed random model weight vector
    val w = new DenseVector(generateRandomArray(NumFeatures))
    val intercept = random.nextGaussian() * 10

    /** Generate a number of random data events */
    def generateNoisyData(n: Int) = {
      (1 to n).map { i =>
        val x = new DenseVector(generateRandomArray(NumFeatures))
        val y: Double = w.dot(x)
        val noisy = y + intercept //+ 0.1 * random.nextGaussian()
        (noisy, x)
      }
    }

    // create a network producer
    val listener = new ServerSocket(9999)
    println("Listening on port: 9999")

    while (true) {
      val socket = listener.accept()
      new Thread() {
        override def run = {
          println("Got client connected from: " + socket.getInetAddress)
          val out = new PrintWriter(socket.getOutputStream(), true)

          while (true) {
            Thread.sleep(1000)
            val num = random.nextInt(MaxEvents)
            val data = generateNoisyData(num)
            data.foreach { case (y, x) =>
              val xStr = x.data.mkString(",")
              val eventStr = s"$y\t$xStr"
              out.write(eventStr)
              out.write("\n")
            }
            out.flush()
            println(s"Created $num events...")
          }
          socket.close()
        }
      }.start()
    }
  }
}
sbt run
Enter number: 5
4.2 Streaming Regression Model
import breeze.linalg.DenseVector
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.{LabeledPoint, StreamingLinearRegressionWithSGD}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.dstream.DStream

object SimpleStreamingModel {

  def main(args: Array[String]) {
    val ssc = new StreamingContext("local[2]", "First Streaming App", Seconds(10))
    val stream = ssc.socketTextStream("localhost", 9999)

    val NumFeatures = 100
    val zeroVector = DenseVector.zeros[Double](NumFeatures)
    val model = new StreamingLinearRegressionWithSGD()
      .setInitialWeights(Vectors.dense(zeroVector.data))
      .setNumIterations(1)
      .setStepSize(0.01)

    // create a stream of labeled points
    val labeledStream: DStream[LabeledPoint] = stream.map { event =>
      val split = event.split("\t")
      val y = split(0).toDouble
      val features: Array[Double] = split(1).split(",").map(_.toDouble)
      LabeledPoint(label = y, features = Vectors.dense(features))
    }

    // train and test model on the stream, and print predictions for illustrative purposes
    model.trainOn(labeledStream)
    //model.predictOn(labeledStream).print()

    ssc.start()
    ssc.awaitTermination()
  }
}
sbt run
Enter number: 3
5 Streaming K-Means
- K-means clustering: StreamingKMeans (a minimal sketch follows)
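The original text gives no code for StreamingKMeans, so here is a minimal sketch under stated assumptions: each input line is a comma-separated feature vector on port 9999, the object name and the k, decay factor, and dimension parameters are illustrative.
import org.apache.spark.mllib.clustering.StreamingKMeans
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingKMeansSketch {
  def main(args: Array[String]) {
    val ssc = new StreamingContext("local[2]", "Streaming KMeans Sketch", Seconds(10))
    // assumed input format: one comma-separated feature vector per line
    val points = ssc.socketTextStream("localhost", 9999)
      .map(line => Vectors.dense(line.split(",").map(_.toDouble)))

    val model = new StreamingKMeans()
      .setK(3)                    // number of clusters (illustrative)
      .setDecayFactor(1.0)        // 1.0 = weight all past data equally
      .setRandomCenters(100, 0.0) // 100-dimensional random initial centers

    model.trainOn(points)             // update cluster centers on each batch
    model.predictOn(points).print()   // print cluster assignments per batch

    ssc.start()
    ssc.awaitTermination()
  }
}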
6 Evaluation
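The program below trains two models with identical zero initial weights but different step sizes (0.01 vs. 1.0) on the same stream, then uses transform to compute per-batch prediction errors and prints each model's MSE and RMSE.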
import breeze.linalg.DenseVector
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.{LabeledPoint, StreamingLinearRegressionWithSGD}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object MonitoringStreamingModel {

  def main(args: Array[String]) {
    val ssc = new StreamingContext("local[2]", "First Streaming App", Seconds(10))
    val stream = ssc.socketTextStream("localhost", 9999)

    val NumFeatures = 100
    val zeroVector = DenseVector.zeros[Double](NumFeatures)
    val model1 = new StreamingLinearRegressionWithSGD()
      .setInitialWeights(Vectors.dense(zeroVector.data))
      .setNumIterations(1)
      .setStepSize(0.01)

    val model2 = new StreamingLinearRegressionWithSGD()
      .setInitialWeights(Vectors.dense(zeroVector.data))
      .setNumIterations(1)
      .setStepSize(1.0)

    // create a stream of labeled points
    val labeledStream = stream.map { event =>
      val split = event.split("\t")
      val y = split(0).toDouble
      val features = split(1).split(",").map(_.toDouble)
      LabeledPoint(label = y, features = Vectors.dense(features))
    }

    // train both models on the same stream
    model1.trainOn(labeledStream)
    model2.trainOn(labeledStream)

    // use transform to create a stream with model error rates
    val predsAndTrue = labeledStream.transform { rdd =>
      val latest1 = model1.latestModel()
      val latest2 = model2.latestModel()
      rdd.map { point =>
        val pred1 = latest1.predict(point.features)
        val pred2 = latest2.predict(point.features)
        (pred1 - point.label, pred2 - point.label)
      }
    }

    // print out the MSE and RMSE metrics for each model per batch
    predsAndTrue.foreachRDD { (rdd, time) =>
      val mse1 = rdd.map { case (err1, err2) => err1 * err1 }.mean()
      val rmse1 = math.sqrt(mse1)
      val mse2 = rdd.map { case (err1, err2) => err2 * err2 }.mean()
      val rmse2 = math.sqrt(mse2)
      println(
        s"""
           |-------------------------------------------
           |Time: $time
           |-------------------------------------------
         """.stripMargin)
      println(s"MSE current batch: Model 1: $mse1; Model 2: $mse2")
      println(s"RMSE current batch: Model 1: $rmse1; Model 2: $rmse2")
      println("...\n")
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
sbt run
Enter number: 1