Spark Streaming: receiving Kafka data and storing it in HBase
This post is mainly based on [this article](https://yq.aliyun.com/articles/60712), which appears to use Spark 1.x; here the code is adapted to the Spark 2.x APIs.
Producing data into Kafka
Without further ado, the code:
```scala
import java.util.{Properties, UUID}

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

object KafkaProducerTest {
  def main(args: Array[String]): Unit = {
    val topics = "test"
    val brokers = "localhost:9092"

    val props = new Properties()
    props.put("bootstrap.servers", brokers)
    props.put("key.serializer", classOf[StringSerializer])
    props.put("value.serializer", classOf[StringSerializer])
    // linger.ms should stay in the 0~100 ms range
    props.put("linger.ms", "50")
    // batch.size and buffer.memory should be tuned to the message size and sending rate
    props.put("batch.size", "16384")
    props.put("buffer.memory", "1638400")
    // the options below are broker-side or legacy (old Scala producer) settings;
    // the new KafkaProducer ignores them and only logs a warning
    props.put("delete.topic.enable", "true")
    props.put("queue.buffering.max.messages", "1000000")
    props.put("queue.enqueue.timeout.ms", "20000000")
    props.put("producer.type", "sync")

    val producer = new KafkaProducer[String, String](props)
    for (i <- 1001 to 2000) {
      val key = UUID.randomUUID().toString.substring(0, 5)
      val value = "fly_" + i + "_" + key
      // send() is asynchronous; append .get() to block until the broker acknowledges
      producer.send(new ProducerRecord[String, String](topics, key, value)) //.get()
    }
    producer.flush()
    producer.close()
  }
}
```
The produced records have the form (key, value) = (uuid, fly_i_key).
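A minimal consumer sketch can be used to confirm the records actually reached the topic, assuming the same localhost:9092 broker and test topic plus a throwaway producer_check consumer group:

```scala
import java.time.Duration
import java.util.{Collections, Properties}

import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.serialization.StringDeserializer

import scala.collection.JavaConverters._

object KafkaProduceCheck {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092")
    props.put("key.deserializer", classOf[StringDeserializer])
    props.put("value.deserializer", classOf[StringDeserializer])
    props.put("group.id", "producer_check")    // throwaway group id, only for this check
    props.put("auto.offset.reset", "earliest") // read the topic from the beginning

    val consumer = new KafkaConsumer[String, String](props)
    consumer.subscribe(Collections.singletonList("test"))
    // poll a few times and print whatever comes back;
    // poll(Duration) is the Kafka 2.x signature, older 0.10.x clients use poll(timeoutMs: Long)
    for (_ <- 1 to 5) {
      val records = consumer.poll(Duration.ofSeconds(1))
      records.asScala.foreach(r => println(s"${r.key} -> ${r.value}"))
    }
    consumer.close()
  }
}
```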
Spark Streaming reading from Kafka and saving to HBase
Once there is data in Kafka, read it with Spark Streaming and write it into HBase. The job writes into an HBase table t1 with a single column family cf1, so that table has to exist first (see the sketch right after this paragraph); the streaming code itself follows.
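A minimal table-creation sketch, assuming the HBase 1.x client API and the same ZooKeeper quorum (localhost:2181) used by the streaming job; on HBase 2.x the TableDescriptorBuilder API replaces HTableDescriptor:

```scala
import org.apache.hadoop.hbase.{HBaseConfiguration, HColumnDescriptor, HTableDescriptor, TableName}
import org.apache.hadoop.hbase.client.ConnectionFactory

object CreateHBaseTable {
  def main(args: Array[String]): Unit = {
    val hconf = HBaseConfiguration.create
    hconf.set("hbase.zookeeper.quorum", "localhost")
    hconf.set("hbase.zookeeper.property.clientPort", "2181")

    val connection = ConnectionFactory.createConnection(hconf)
    val admin = connection.getAdmin
    val tableName = TableName.valueOf("t1")
    if (!admin.tableExists(tableName)) {
      // one column family "cf1", matching what the streaming job writes into
      val desc = new HTableDescriptor(tableName)
      desc.addFamily(new HColumnDescriptor("cf1"))
      admin.createTable(desc)
    }
    admin.close()
    connection.close()
  }
}
```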
```scala
import java.util.UUID

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{Mutation, Put}
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapred.JobConf
import org.apache.hadoop.mapreduce.OutputFormat
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
 * Spark Streaming reads Kafka messages, runs them through Spark SQL,
 * and saves the result to HBase.
 */
object OBDSQL {

  case class Person(name: String, age: Int, key: String)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder()
      .appName("sparkSql")
      .master("local[4]")
      .getOrCreate()

    val sc = spark.sparkContext
    val ssc = new StreamingContext(sc, Seconds(5))

    val topics = Array("test")
    val kafkaParams = Map(
      "bootstrap.servers" -> "localhost:9092,anotherhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "use_a_separate_group_id_for_each_stream_fly",
      "auto.offset.reset" -> "earliest", // or "latest" / "none"
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val lines = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](topics, kafkaParams)
    )

    lines.foreachRDD((rdd: RDD[ConsumerRecord[String, String]]) => {
      import spark.implicits._
      if (!rdd.isEmpty()) {
        // register this batch as a temp table
        rdd.map(_.value.split("_"))
          .map(p => Person(p(0), p(1).trim.toInt, p(2)))
          .toDF
          .createOrReplaceTempView("temp")
        // query it with Spark SQL
        val rs = spark.sql("select * from temp")

        // HBase configuration
        val hconf = HBaseConfiguration.create
        hconf.set("hbase.zookeeper.quorum", "localhost") // ZooKeeper quorum
        hconf.set("hbase.zookeeper.property.clientPort", "2181")
        hconf.set("hbase.defaults.for.version.skip", "true")
        hconf.set(TableOutputFormat.OUTPUT_TABLE, "t1") // t1 is the table name; it has one column family, cf1
        hconf.setClass("mapreduce.job.outputformat.class",
          classOf[TableOutputFormat[String]], classOf[OutputFormat[String, Mutation]])
        // JobConf extends Configuration, so it can be handed to saveAsNewAPIHadoopDataset;
        // passing hconf directly would work as well
        val jobConf = new JobConf(hconf)

        // turn every row into an HBase Put keyed by a random row key
        rs.rdd.map(line => {
          val put = new Put(Bytes.toBytes(UUID.randomUUID().toString.substring(0, 9)))
          put.addColumn(Bytes.toBytes("cf1"), Bytes.toBytes("name"), Bytes.toBytes(line.get(0).toString))
          put.addColumn(Bytes.toBytes("cf1"), Bytes.toBytes("age"), Bytes.toBytes(line.get(1).toString))
          put.addColumn(Bytes.toBytes("cf1"), Bytes.toBytes("key"), Bytes.toBytes(line.get(2).toString))
          (new ImmutableBytesWritable, put)
        }).saveAsNewAPIHadoopDataset(jobConf)
      }
    })

    lines.map(record => record.value.split("_")).map(x => (x(0), x(1), x(2))).print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```
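Because enable.auto.commit is false, the job above never commits offsets back to Kafka, so every restart resumes from auto.offset.reset. Below is a sketch of committing offsets per batch with the commitAsync API of the kafka010 integration; it is meant to be merged into the foreachRDD above, and the exact placement is an assumption:

```scala
import org.apache.spark.streaming.kafka010.{CanCommitOffsets, HasOffsetRanges}

// inside main(), reusing the `lines` stream created by createDirectStream
lines.foreachRDD { rdd =>
  // capture the Kafka offset ranges before any transformation
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  if (!rdd.isEmpty()) {
    // ... build the DataFrame and write the batch to HBase as shown above ...
  }
  // commit only after the batch was processed; commitAsync gives at-least-once delivery,
  // so the HBase writes should be idempotent (the random row keys above would duplicate rows on replay)
  lines.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
}
```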