References first:

Spark Streaming + Kafka Integration Guide (Kafka broker version 0.10.0 or higher): http://spark.apache.org/docs/2.2.0/streaming-kafka-0-10-integration.html

Kafka (Java client Producer API): http://kafka.apache.org/documentation/#producerapi

Versions:

Spark: 2.1.1
Scala: 2.11.12
Kafka broker: 2.3.0
spark-streaming-kafka-0-10_2.11: 2.2.0

Development environment:

  Kafka is deployed on 3 virtual machines with hostnames coo1, coo2, and coo3, using the versions listed above; ZooKeeper is 3.4.7.

  Create the topic xzrz on Kafka, with a replication factor of 3 and 4 partitions:

./kafka-topics.sh --bootstrap-server coo3:9092,coo2:9092,coo1:9092 --create --topic xzrz --replication-factor 3 --partitions 4
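
For reference, the same topic can also be created programmatically with the Kafka AdminClient from kafka-clients. The following is only a minimal sketch (in Scala; the object name CreateTopicXzrz is illustrative and not part of the setup above):

import java.util.{Collections, Properties}
import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, NewTopic}

object CreateTopicXzrz {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "coo1:9092,coo2:9092,coo3:9092")
    val admin = AdminClient.create(props)
    try {
      // topic xzrz: 4 partitions, replication factor 3 -- same as the CLI command above
      admin.createTopics(Collections.singleton(new NewTopic("xzrz", 4, 3.toShort))).all().get()
    } finally {
      admin.close()
    }
  }
}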

  Prepare the code:

  One part is the Java Kafka producer: KafkaSender.java

  The other is the Scala Spark Streaming Kafka consumer: KafkaStreaming.scala

Kafka producer:

  Maven configuration:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>kafkaTest</groupId>
    <artifactId>kafkaTest</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>2.3.0</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.13-beta-2</version>
            <scope>compile</scope>
        </dependency>
    </dependencies>
</project>
KafkaSender.java:
package gjm;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.junit.Test;

import java.util.Properties;
import java.util.concurrent.ExecutionException;

public class KafkaSender {
    @Test
    public void producer() throws InterruptedException, ExecutionException {
        Properties props = new Properties();
        props.put("key.serializer", "org.apache.kafka.common.serialization.IntegerSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "coo1:9092,coo2:9092,coo3:9092");
        // props.put(ProducerConfig.BATCH_SIZE_CONFIG, "1024");
        // props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, "0");
        // Configure the producer for exactly-once semantics (acks=all + idempotence)
        props.put("acks", "all");
        props.put("enable.idempotence", "true");
        Producer<Integer, String> kafkaProducer = new KafkaProducer<Integer, String>(props);
        for (int j = 0; j < 1; j++) {
            for (int i = 0; i < 100; i++) {
                ProducerRecord<Integer, String> message = new ProducerRecord<Integer, String>("xzrz", "{wo|2019-12-12|1|2|0|5}");
                kafkaProducer.send(message);
            }
        }
        // Always call flush() and close(), just as with stream I/O.
        // The producer batches records in an internal buffer; without these two calls,
        // resources are wasted and messages may stay in the local batch and never reach Kafka.
        kafkaProducer.flush();
        kafkaProducer.close();
    }
}
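
Note that send() is asynchronous: it only hands the record to the producer's internal buffer. If delivery of each record needs to be confirmed, you can block on the Future that send() returns. Below is a minimal sketch of such a synchronous send (written in Scala against the same Java producer API; the object name SyncSender is illustrative, and blocking on every send trades throughput for certainty):

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}

object SyncSender {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "coo1:9092,coo2:9092,coo3:9092")
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
    props.put(ProducerConfig.ACKS_CONFIG, "all")

    val producer = new KafkaProducer[String, String](props)
    try {
      // send() returns a Future[RecordMetadata]; get() blocks until the broker acknowledges the write
      val metadata = producer.send(new ProducerRecord[String, String]("xzrz", "{wo|2019-12-12|1|2|0|5}")).get()
      println(s"written to partition ${metadata.partition()} at offset ${metadata.offset()}")
    } finally {
      producer.close()
    }
  }
}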

Kafka consumer --> spark-streaming-kafka --> KafkaStreaming.scala:

  Maven configuration:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>sparkMVN</groupId>
    <artifactId>sparkMVN</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <spark.version>2.1.1</spark.version>
        <hadoop.version>2.7.3</hadoop.version>
        <hbase.version>0.98.17-hadoop2</hbase.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.11</artifactId>
            <version>${spark.version}</version>
            <!-- In local mode this scope must stay commented out; otherwise the Spark classes
                 (e.g. SparkContext) are missing at runtime and a class-not-found exception is thrown. -->
            <!--<scope>provided</scope>-->
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
            <version>2.2.0</version>
        </dependency>

        <!-- hadoop -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>${hadoop.version}</version>
        </dependency>

        <!-- hbase -->
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>${hbase.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-server</artifactId>
            <version>${hbase.version}</version>
        </dependency>
    </dependencies>
</project>
KafkaStreaming.scala:
package gjm.sparkDemos

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.{CanCommitOffsets, HasOffsetRanges, KafkaUtils}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}
import org.slf4j.LoggerFactory

object KafkaStreaming {
  def main(args: Array[String]): Unit = {
    val LOG = LoggerFactory.getLogger(KafkaStreaming.getClass)
    LOG.info("Streaming start----->")
    // local[6]: run with 6 local threads, to see what effect this has on Kafka consumption
    val conf = new SparkConf().setMaster("local[6]")
      .setAppName("KafkaStreaming")
    val sc = new SparkContext(conf)
    val ssc = new StreamingContext(sc, Seconds(3))
    val topics = Array("xzrz")
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "coo1:9092,coo2:9092,coo3:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "fwjkcx",
      "auto.offset.reset" -> "earliest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
      // "heartbeat.interval.ms" -> (90000: java.lang.Integer),
      // "session.timeout.ms" -> (120000: java.lang.Integer),
      // "group.max.session.timeout.ms" -> (120000: java.lang.Integer),
      // "request.timeout.ms" -> (130000: java.lang.Integer),
      // "fetch.max.wait.ms" -> (120000: java.lang.Integer)
    )
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](topics, kafkaParams)
    )
    LOG.info("Streaming had Created----->")
    LOG.info("Streaming Consuming msg----->")
    stream.foreachRDD { rdd =>
      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      rdd.foreachPartition(recordIt => {
        for (record <- recordIt) {
          LOG.info("Message record info: topic-->{}, partition-->{}, checkNum-->{}, offset-->{}, value-->{}",
            record.topic(), record.partition().toString, record.checksum().toString, record.offset().toString, record.value())
        }
      })
      // some time later, after the outputs have completed, commit the offsets back to Kafka
      stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
    }
    ssc.start()
    ssc.awaitTermination()
  }
}
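
A note on commitAsync: it does not commit immediately; the offset ranges are queued and committed to Kafka on a later batch, so the output semantics are at-least-once. To see whether a commit actually succeeded, CanCommitOffsets also provides a commitAsync overload that takes an OffsetCommitCallback. Below is a minimal sketch of the foreachRDD block above with such a callback (the two extra imports are shown; the log messages are illustrative):

import org.apache.kafka.clients.consumer.{OffsetAndMetadata, OffsetCommitCallback}
import org.apache.kafka.common.TopicPartition

stream.foreachRDD { rdd =>
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  // ... process the partitions as above ...
  stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges, new OffsetCommitCallback {
    override def onComplete(offsets: java.util.Map[TopicPartition, OffsetAndMetadata],
                            exception: Exception): Unit = {
      if (exception != null) {
        // the commit did not reach Kafka; on restart some records may be re-processed (at-least-once)
        LOG.warn("offset commit failed: {}", exception.getMessage)
      } else {
        LOG.info("offset commit succeeded for {} partitions", offsets.size().toString)
      }
    }
  })
}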

Verification tests:

1. Send 100 messages with the producer.
2. Start the console consumer that ships with Kafka, with group id test:
sh kafka-console-consumer.sh --bootstrap-server coo3:9092,coo2:9092,coo1:9092 --topic xzrz --from-beginning --group test
3. Start the spark-streaming-kafka job; the batch interval is set to 3 seconds in the code.
4. Query the consumption state of each group with Kafka's consumer-groups tool:
./kafka-consumer-groups.sh --bootstrap-server coo3:9092,coo2:9092,coo1:9092 --describe --group test
./kafka-consumer-groups.sh --bootstrap-server coo3:9092,coo2:9092,coo1:9092 --describe --group fwjkcx
Results:
1. First, the consumption state of the test group:
[root@coo3 bin]# ./kafka-consumer-groups.sh --bootstrap-server coo3:9092,coo2:9092,coo1:9092 --describe --group test

GROUP           TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                     HOST            CLIENT-ID
test            xzrz            0          25              25              0               consumer-1-4adfdb85-45ef-40a5-9127-7bb6239e0e29 /192.168.0.217  consumer-1
test            xzrz            1          25              25              0               consumer-1-4adfdb85-45ef-40a5-9127-7bb6239e0e29 /192.168.0.217  consumer-1
test            xzrz            2          25              25              0               consumer-1-4adfdb85-45ef-40a5-9127-7bb6239e0e29 /192.168.0.217  consumer-1
test            xzrz            3          25              25              0               consumer-1-4adfdb85-45ef-40a5-9127-7bb6239e0e29 /192.168.0.217  consumer-1
Observation: all four partitions are assigned to the same consumer instance, which consumed 100 messages in total.
2. Next, the consumption state of the Spark Streaming group fwjkcx:
[root@coo3 bin]# ./kafka-consumer-groups.sh --bootstrap-server coo3:9092,coo2:9092,coo1:9092 --describe --group fwjkcx

GROUP           TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                     HOST            CLIENT-ID
fwjkcx          xzrz            0          25              25              0               consumer-1-0cca92be-5970-4030-abd1-b8552dea9718 /192.168.0.60   consumer-1
fwjkcx          xzrz            1          25              25              0               consumer-1-0cca92be-5970-4030-abd1-b8552dea9718 /192.168.0.60   consumer-1
fwjkcx          xzrz            2          25              25              0               consumer-1-0cca92be-5970-4030-abd1-b8552dea9718 /192.168.0.60   consumer-1
fwjkcx          xzrz            3          25              25              0               consumer-1-0cca92be-5970-4030-abd1-b8552dea9718 /192.168.0.60   consumer-1
Observation: each of the four partitions consumed 25 messages, as expected (the producer sends records without a key, so the default partitioner spreads them evenly across the four partitions).
3. An additional experiment: change the Spark Streaming master from local[6] to local[3] and the consumer group to fwjkcx01, then check the consumption state:
[root@coo3 bin]# ./kafka-consumer-groups.sh --bootstrap-server coo3:9092,coo2:9092,coo1:9092 --describe --group fwjkcx01

GROUP           TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                     HOST            CLIENT-ID
fwjkcx01        xzrz            0          25              25              0               consumer-1-03542086-c95d-41c2-b199-24a158708b65 /192.168.0.60   consumer-1
fwjkcx01        xzrz            1          25              25              0               consumer-1-03542086-c95d-41c2-b199-24a158708b65 /192.168.0.60   consumer-1
fwjkcx01        xzrz            2          25              25              0               consumer-1-03542086-c95d-41c2-b199-24a158708b65 /192.168.0.60   consumer-1
fwjkcx01        xzrz            3          25              25              0               consumer-1-03542086-c95d-41c2-b199-24a158708b65 /192.168.0.60   consumer-1
The consumption pattern is unchanged: a single consumer instance still reads all four partitions, so the thread count specified in local[N] does not change how the Kafka partitions are consumed.
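
This matches how the 0-10 direct stream works: it creates one Spark partition per Kafka partition, and local[N] only controls how many of the resulting tasks can run concurrently. A quick way to verify this is to log the partition count of each batch RDD; a minimal sketch against the stream defined above:

stream.foreachRDD { rdd =>
  // with topic xzrz (4 Kafka partitions) this logs 4, whether the master is local[3] or local[6]
  LOG.info("batch RDD has {} partitions", rdd.getNumPartitions.toString)
}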
4. Now send another 1000 messages and check the consumption state of group fwjkcx again:
[root@coo3 bin]# ./kafka-consumer-groups.sh --bootstrap-server coo3:9092,coo2:9092,coo1:9092 --describe --group fwjkcx

GROUP           TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                     HOST            CLIENT-ID
fwjkcx          xzrz            0          275             275             0               consumer-1-fc238353-1b55-4efa-9c4f-54580ed81b0e /192.168.0.60   consumer-1
fwjkcx          xzrz            1          275             275             0               consumer-1-fc238353-1b55-4efa-9c4f-54580ed81b0e /192.168.0.60   consumer-1
fwjkcx          xzrz            2          275             275             0               consumer-1-fc238353-1b55-4efa-9c4f-54580ed81b0e /192.168.0.60   consumer-1
fwjkcx          xzrz            3          275             275             0               consumer-1-fc238353-1b55-4efa-9c4f-54580ed81b0e /192.168.0.60   consumer-1
Everything looks normal: each partition advanced by 250 offsets (1000 messages spread over 4 partitions) and the lag is 0.
