Spark and Flume integration
Spark Streaming and Flume integration: push mode
package cn.my.sparkStream

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming._
import org.apache.spark.streaming.flume._

/**
 * Push-based approach: Flume pushes events into this receiver through an Avro sink.
 */
object SparkFlumePush {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println(
        "Usage: FlumeEventCount <host> <port>")
      System.exit(1)
    }
    LogLevel.setStreamingLogLevels() // project helper that quiets Spark's logging
    val Array(host, port) = args
    val batchInterval = Milliseconds(2000)
    // Create the context and set the batch size
    val sparkConf = new SparkConf().setAppName("FlumeEventCount").setMaster("local[2]")
    val ssc = new StreamingContext(sparkConf, batchInterval)
    // Create a flume stream
    val stream = FlumeUtils.createStream(ssc, host, port.toInt, StorageLevel.MEMORY_ONLY_SER_2)
    // Print out the count of events received from this server in each batch
    stream.count().map(cnt => "Received " + cnt + " flume events.").print()
    // Each record wraps a Flume event; the event body holds the actual message payload
    stream.flatMap(t => new String(t.event.getBody.array()).split(" ")).map((_, 1)).reduceByKey(_ + _).print()
    ssc.start()
    ssc.awaitTermination()
  }
}
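For the push approach, the Flume agent is the client: it needs an Avro sink pointing at the host and port that SparkFlumePush listens on, and the Spark application must be running before the agent starts. A minimal agent sketch, assuming illustrative names (a1, r1, c1, k1), a netcat test source, and the same mini1:8888 address the pull example below uses:

# Hypothetical Flume agent for the push approach
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# netcat source is just an easy way to type in test events
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sources.r1.channels = c1

a1.channels.c1.type = memory

# avro sink pushes events to the Spark receiver
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = mini1   # must match the <host> argument of SparkFlumePush
a1.sinks.k1.port = 8888        # must match the <port> argument
a1.sinks.k1.channel = c1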
Spark Streaming and Flume integration: pull mode

package cn.my.sparkStream

import java.net.InetSocketAddress

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming._
import org.apache.spark.streaming.flume._

/**
 * Pull-based approach: Flume buffers events in a SparkSink and Spark polls them.
 */
object SparkFlumePull {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println(
        "Usage: FlumeEventCount <host> <port>")
      System.exit(1)
    }
    LogLevel.setStreamingLogLevels() // project helper that quiets Spark's logging
    val Array(host, port) = args
    val batchInterval = Milliseconds(2000)
    // Create the context and set the batch size
    val sparkConf = new SparkConf().setAppName("FlumeEventCount").setMaster("local[2]")
    val ssc = new StreamingContext(sparkConf, batchInterval)
    // Push-based alternative:
    //val stream = FlumeUtils.createStream(ssc, host, port.toInt, StorageLevel.MEMORY_ONLY_SER_2)
    // Single-address polling variant:
    //val flumeStream = FlumeUtils.createPollingStream(ssc, host, port.toInt)
    /*
    One of the createPollingStream overloads:
    def createPollingStream(
        jssc: JavaStreamingContext,
        addresses: Array[InetSocketAddress],
        storageLevel: StorageLevel
    )
    */
    // When there is more than one sink, pass all of their addresses
    val flumeSinkList = Array[InetSocketAddress](new InetSocketAddress("mini1", 8888))
    val flumeStream = FlumeUtils.createPollingStream(ssc, flumeSinkList, StorageLevel.MEMORY_ONLY_2)
    // Print out the count of events received in each batch
    flumeStream.count().map(cnt => "Received " + cnt + " flume events.").print()
    // Each record wraps a Flume event; the event body holds the actual message payload
    flumeStream.flatMap(t => {
      new String(t.event.getBody.array()).split(" ")
    }).map((_, 1)).reduceByKey(_ + _).print()
    ssc.start()
    ssc.awaitTermination()
  }
}
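On the Flume side, the pull approach uses the custom SparkSink from the spark-streaming-flume-sink artifact; per the 1.6.3 integration guide linked below, that jar (and a matching scala-library jar) must be on the Flume agent's classpath. A minimal sink sketch, reusing the a1/c1 names from the push example and the mini1:8888 address the code above polls:

a1.sinks = spark
a1.sinks.spark.type = org.apache.spark.streaming.flume.sink.SparkSink
a1.sinks.spark.hostname = mini1   # the address SparkFlumePull polls
a1.sinks.spark.port = 8888
a1.sinks.spark.channel = c1       # events wait here until Spark pulls them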
http://spark.apache.org/docs/1.6.3/streaming-flume-integration.html
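Both examples also assume the spark-streaming-flume artifact is on the application classpath; matching the 1.6.3 guide above, the SBT coordinate would presumably be:

libraryDependencies += "org.apache.spark" %% "spark-streaming-flume" % "1.6.3"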