Spark Streaming: A Thorough Look at the No-Receivers Approach
There are two ways to bring data into Spark Streaming: the Receiver-based approach and the no-receivers (Direct) approach.
For enterprise-grade Spark Streaming applications, the no-receivers approach is recommended, for two reasons:
1. Greater freedom of control
2. Consistent (exactly-once) semantics
The no-receivers approach also fits data reading and data manipulation more naturally. The Spark computing framework always sits on top of some data source; operating on that source directly (the Direct approach) is the natural fit, and any wrapper around a data source necessarily lives at the RDD level. That is why Spark provides a custom RDD, KafkaRDD — an ordinary RDD whose data source happens to be Kafka.
Into the source code:
The Scaladoc describes a batch-oriented interface for consuming from Kafka: the starting and ending offsets are fixed in advance, which is precisely what enables exactly-once semantics. The key parameter is metadata.broker.list (or bootstrap.servers), which talks to the Kafka cluster directly; every read against Kafka covers an explicit offset range.
/**
 * A batch-oriented interface for consuming from Kafka.
 * Starting and ending offsets are specified in advance,
 * so that you can control exactly-once semantics.
 * @param kafkaParams Kafka <a href="http://kafka.apache.org/documentation.html#configuration">
 * configuration parameters</a>. Requires "metadata.broker.list" or "bootstrap.servers" to be set
 * with Kafka broker(s) specified in host1:port1,host2:port2 form.
 * @param offsetRanges offset ranges that define the Kafka data belonging to this RDD
 * @param messageHandler function for translating each message into the desired type
 */
private[kafka]
class KafkaRDD[
  K: ClassTag,
  V: ClassTag,
  U <: Decoder[_]: ClassTag,
  T <: Decoder[_]: ClassTag,
  R: ClassTag] private[spark] (
    sc: SparkContext,
    kafkaParams: Map[String, String],
    val offsetRanges: Array[OffsetRange],
    leaders: Map[TopicAndPartition, (String, Int)],
    messageHandler: MessageAndMetadata[K, V] => R
  ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges {
To access data in Kafka directly you need a custom KafkaRDD, just as reading HBase directly would require a custom HBaseRDD. One requirement is implementing the HasOffsetRanges trait. An RDD is, at heart, a list of partitions; an RDD used for direct Kafka access must be a HasOffsetRanges, representing data from a Kafka TopicAndPartition. Its instances are created via OffsetRange.create, spanning fromOffset up to untilOffset, and because offsets are shipped around the cluster, OffsetRange must be serializable.
/**
 * Represents any object that has a collection of [[OffsetRange]]s. This can be used to access the
 * offset ranges in RDDs generated by the direct Kafka DStream (see
 * [[KafkaUtils.createDirectStream()]]).
 * {{{
 *   KafkaUtils.createDirectStream(...).foreachRDD { rdd =>
 *      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
 *      ...
 *   }
 * }}}
 */
trait HasOffsetRanges {
  def offsetRanges: Array[OffsetRange]
}
/**
 * Represents a range of offsets from a single Kafka TopicAndPartition. Instances of this class
 * can be created with `OffsetRange.create()`.
 * @param topic Kafka topic name
 * @param partition Kafka partition id
 * @param fromOffset Inclusive starting offset
 * @param untilOffset Exclusive ending offset
 */
final class OffsetRange private(
    val topic: String,
    val partition: Int,
    val fromOffset: Long,
    val untilOffset: Long) extends Serializable {
  import OffsetRange.OffsetRangeTuple

  /** Kafka TopicAndPartition object, for convenience */
  def topicAndPartition(): TopicAndPartition = TopicAndPartition(topic, partition)

  /** Number of messages this OffsetRange refers to */
  def count(): Long = untilOffset - fromOffset
An offset is a message offset. If untilOffset is 100,000 and fromOffset is 50,000, the range runs from message 50,000 up to (but not including) message 100,000; the size of the data to process is usually measured in number of messages. When you create an OffsetRange instance you determine exactly which topic and partition to read from the Kafka cluster. Inside foreachRDD you can obtain the offset ranges of all partitions the current RDD covers — the partition data of the RDD produced in this batch duration — which is control over the metadata, as shown in the sketch below.
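A small sketch of that metadata access, following the usage shown in the HasOffsetRanges Scaladoc above (the directStream variable is a placeholder for a stream built with KafkaUtils.createDirectStream):

directStream.foreachRDD { rdd =>
  // Every RDD produced by the direct Kafka DStream is a KafkaRDD, so it carries HasOffsetRanges.
  val ranges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  ranges.foreach { r =>
    println(s"${r.topic}-${r.partition}: [${r.fromOffset}, ${r.untilOffset}) = ${r.count()} messages")
  }
}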
Next, look at the getPartitions method: offsetRanges dictates, for each range, where reading starts and where it ends.
override def getPartitions: Array[Partition] = {
  offsetRanges.zipWithIndex.map { case (o, i) =>
      val (host, port) = leaders(TopicAndPartition(o.topic, o.partition))
      new KafkaRDDPartition(i, o.topic, o.partition, o.fromOffset, o.untilOffset, host, port)
  }.toArray
}
Now look at the KafkaRDDPartition class, which carries the topic, partition, and offsets from which the Kafka data will be fetched:
/** @param topic kafka topic name
  * @param partition kafka partition id
  * @param fromOffset inclusive starting offset
  * @param untilOffset exclusive ending offset
  * @param host preferred kafka host, i.e. the leader at the time the rdd was created
  * @param port preferred kafka host's port
  */
private[kafka]
class KafkaRDDPartition(
  val index: Int,
  val topic: String,
  val partition: Int,
  val fromOffset: Long,
  val untilOffset: Long,
  val host: String,
  val port: Int
) extends Partition {
  /** Number of messages this partition refers to */
  def count(): Long = untilOffset - fromOffset
}
host and port identify the Kafka broker the data is read from.
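That preferred host feeds into Spark's locality scheduling; a sketch of how KafkaRDD consumes it via getPreferredLocations (paraphrased from the same source file):

override def getPreferredLocations(thePart: Partition): Seq[String] = {
  val part = thePart.asInstanceOf[KafkaRDDPartition]
  // Prefer running the task on the broker that was the partition leader
  // when the RDD was created.
  Seq(part.host)
}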
Now look at KafkaRDD's compute, which computes one data partition at a time — exactly the standard RDD model, where each pass works on one slice of the RDD. Driving a KafkaRDDIterator is the same as iterating over any RDD partition:
override def compute(thePart: Partition, context: TaskContext): Iterator[R] = {
  val part = thePart.asInstanceOf[KafkaRDDPartition]
  assert(part.fromOffset <= part.untilOffset, errBeginAfterEnd(part))
  if (part.fromOffset == part.untilOffset) {
    log.info(s"Beginning offset ${part.fromOffset} is the same as ending offset " +
      s"skipping ${part.topic} ${part.partition}")
    Iterator.empty
  } else {
    new KafkaRDDIterator(part, context)
  }
}
private class KafkaRDDIterator(
    part: KafkaRDDPartition,
    context: TaskContext) extends NextIterator[R] {

  context.addTaskCompletionListener{ context => closeIfNeeded() }

  log.info(s"Computing topic ${part.topic}, partition ${part.partition} " +
    s"offsets ${part.fromOffset} -> ${part.untilOffset}")

  val kc = new KafkaCluster(kafkaParams)
  val keyDecoder = classTag[U].runtimeClass.getConstructor(classOf[VerifiableProperties])
    .newInstance(kc.config.props)
    .asInstanceOf[Decoder[K]]
  val valueDecoder = classTag[T].runtimeClass.getConstructor(classOf[VerifiableProperties])
    .newInstance(kc.config.props)
    .asInstanceOf[Decoder[V]]
  val consumer = connectLeader
  var requestOffset = part.fromOffset
  var iter: Iterator[MessageAndOffset] = null

  // The idea is to use the provided preferred host, except on task retry attempts,
  // to minimize number of kafka metadata requests
  private def connectLeader: SimpleConsumer = {
    if (context.attemptNumber > 0) {
      kc.connectLeader(part.topic, part.partition).fold(
        errs => throw new SparkException(
          s"Couldn't connect to leader for topic ${part.topic} ${part.partition}: " +
            errs.mkString("\n")),
        consumer => consumer
      )
    } else {
      kc.connect(part.host, part.port)
    }
  }

  private def handleFetchErr(resp: FetchResponse) {
    if (resp.hasError) {
      val err = resp.errorCode(part.topic, part.partition)
      if (err == ErrorMapping.LeaderNotAvailableCode ||
        err == ErrorMapping.NotLeaderForPartitionCode) {
        log.error(s"Lost leader for topic ${part.topic} partition ${part.partition}, " +
          s" sleeping for ${kc.config.refreshLeaderBackoffMs}ms")
        Thread.sleep(kc.config.refreshLeaderBackoffMs)
      }
      // Let normal rdd retry sort out reconnect attempts
      throw ErrorMapping.exceptionFor(err)
    }
  }
The key point is that the KafkaCluster object is created directly inside KafkaUtils when the direct stream is built. Looking back at typical Kafka application code, the parameters passed in are the streaming context, the broker list, and the topic list. Topics are supplied as a Set at construction time (you can also specify offset ranges explicitly). KafkaUtils creates a KafkaCluster straight against the Kafka cluster to interact with it and determine the concrete fromOffsets at which fetching begins:
/**
 * Create an input stream that directly pulls messages from Kafka Brokers
 * without using any receiver. This stream can guarantee that each message
 * from Kafka is included in transformations exactly once (see points below).
 *
 * Points to note:
 *  - No receivers: This stream does not use any receiver. It directly queries Kafka
 *  - Offsets: This does not use Zookeeper to store offsets. The consumed offsets are tracked
 *    by the stream itself. For interoperability with Kafka monitoring tools that depend on
 *    Zookeeper, you have to update Kafka/Zookeeper yourself from the streaming application.
 *    You can access the offsets used in each batch from the generated RDDs (see
 *    [[org.apache.spark.streaming.kafka.HasOffsetRanges]]).
 *  - Failure Recovery: To recover from driver failures, you have to enable checkpointing
 *    in the [[StreamingContext]]. The information on consumed offset can be
 *    recovered from the checkpoint. See the programming guide for details (constraints, etc.).
 *  - End-to-end semantics: This stream ensures that every records is effectively received and
 *    transformed exactly once, but gives no guarantees on whether the transformed data are
 *    outputted exactly once. For end-to-end exactly-once semantics, you have to either ensure
 *    that the output operation is idempotent, or use transactions to output records atomically.
 *    See the programming guide for more details.
 *
 * @param ssc StreamingContext object
 * @param kafkaParams Kafka <a href="http://kafka.apache.org/documentation.html#configuration">
 *   configuration parameters</a>. Requires "metadata.broker.list" or "bootstrap.servers"
 *   to be set with Kafka broker(s) (NOT zookeeper servers), specified in
 *   host1:port1,host2:port2 form.
 *   If not starting from a checkpoint, "auto.offset.reset" may be set to "largest" or "smallest"
 *   to determine where the stream starts (defaults to "largest")
 * @param topics Names of the topics to consume
 * @tparam K type of Kafka message key
 * @tparam V type of Kafka message value
 * @tparam KD type of Kafka message key decoder
 * @tparam VD type of Kafka message value decoder
 * @return DStream of (Kafka message key, Kafka message value)
 */
def createDirectStream[
  K: ClassTag,
  V: ClassTag,
  KD <: Decoder[K]: ClassTag,
  VD <: Decoder[V]: ClassTag] (
    ssc: StreamingContext,
    kafkaParams: Map[String, String],
    topics: Set[String]
): InputDStream[(K, V)] = {
  val messageHandler = (mmd: MessageAndMetadata[K, V]) => (mmd.key, mmd.message)
  val kc = new KafkaCluster(kafkaParams)
  val fromOffsets = getFromOffsets(kc, kafkaParams, topics)
  new DirectKafkaInputDStream[K, V, KD, VD, (K, V)](
    ssc, kafkaParams, fromOffsets, messageHandler)
}
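For reference, a minimal sketch of calling this API from application code — the broker addresses and topic name are placeholders, and the word-count logic is just an illustration:

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val conf = new SparkConf().setAppName("DirectKafkaWordCount")
val ssc = new StreamingContext(conf, Seconds(10))
val kafkaParams = Map("metadata.broker.list" -> "host1:9092,host2:9092") // placeholder brokers
val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, Set("mytopic")) // placeholder topic
stream.map(_._2).flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()
ssc.start()
ssc.awaitTermination()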
Now look at the getFromOffsets method:
private[kafka] def getFromOffsets(
    kc: KafkaCluster,
    kafkaParams: Map[String, String],
    topics: Set[String]
  ): Map[TopicAndPartition, Long] = {
  val reset = kafkaParams.get("auto.offset.reset").map(_.toLowerCase)
  val result = for {
    topicPartitions <- kc.getPartitions(topics).right
    leaderOffsets <- (if (reset == Some("smallest")) {
      kc.getEarliestLeaderOffsets(topicPartitions)
    } else {
      kc.getLatestLeaderOffsets(topicPartitions)
    }).right
  } yield {
    leaderOffsets.map { case (tp, lo) =>
        (tp, lo.offset)
    }
  }
  KafkaCluster.checkErrors(result)
}
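In other words, where consumption starts hinges on auto.offset.reset. A sketch of the two settings (the 0.8-era consumer uses "smallest"/"largest", not the newer "earliest"/"latest"; kafkaParams is the same placeholder map as above):

// Read each partition from its earliest available offset:
val paramsFromBeginning = kafkaParams + ("auto.offset.reset" -> "smallest")
// Default: read only messages that arrive after the stream starts:
val paramsFromLatest = kafkaParams + ("auto.offset.reset" -> "largest")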
If no fromOffsets are known, they are derived from the configuration as above. When the direct Kafka input stream is created, it interacts with the Kafka cluster to obtain partition and offset information; whichever path is taken, KafkaUtils ultimately constructs a DirectKafkaInputDStream:
/**
 * A stream of {@link org.apache.spark.streaming.kafka.KafkaRDD} where
 * each given Kafka topic/partition corresponds to an RDD partition.
 * The spark configuration spark.streaming.kafka.maxRatePerPartition gives the maximum number
 * of messages per second that each '''partition''' will accept.
 * Starting offsets are specified in advance,
 * and this DStream is not responsible for committing offsets,
 * so that you can control exactly-once semantics.
 * For an easy interface to Kafka-managed offsets,
 * see {@link org.apache.spark.streaming.kafka.KafkaCluster}
 * @param kafkaParams Kafka <a href="http://kafka.apache.org/documentation.html#configuration">
 * configuration parameters</a>.
 *   Requires "metadata.broker.list" or "bootstrap.servers" to be set with Kafka broker(s),
 *   NOT zookeeper servers, specified in host1:port1,host2:port2 form.
 * @param fromOffsets per-topic/partition Kafka offsets defining the (inclusive)
 *  starting point of the stream
 * @param messageHandler function for translating each message into the desired type
 */
private[streaming]
class DirectKafkaInputDStream[
  K: ClassTag,
  V: ClassTag,
  U <: Decoder[K]: ClassTag,
  T <: Decoder[V]: ClassTag,
  R: ClassTag](
    ssc_ : StreamingContext,
    val kafkaParams: Map[String, String],
    val fromOffsets: Map[TopicAndPartition, Long],
    messageHandler: MessageAndMetadata[K, V] => R
  ) extends InputDStream[R](ssc_) with Logging {

  val maxRetries = context.sparkContext.getConf.getInt(
    "spark.streaming.kafka.maxRetries", 1)
DirectKafkaInputDStream produces KafkaRDDs: each topic partition maps to a corresponding KafkaRDDPartition, and the consumption rate can be throttled. When data is processed, compute builds a KafkaRDD directly and reads the data off Kafka. Once the offset range for a batch is fixed, the number of records to read is known, and the KafkaRDD instance is built accordingly. KafkaRDD instances correspond one-to-one to the batches of the DirectKafkaInputDStream: each compute call produces one KafkaRDD, containing as many partitions as there are Kafka partitions being consumed.
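A minimal sketch of that rate control, using the spark.streaming.kafka.maxRatePerPartition setting named in the Scaladoc above — the figure 1000 and the app name are purely illustrative:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf()
  .setAppName("DirectKafkaApp") // placeholder app name
  // At most 1000 messages/sec per Kafka partition; with a 10s batch duration,
  // each KafkaRDDPartition then holds at most 10,000 records.
  .set("spark.streaming.kafka.maxRatePerPartition", "1000")
val ssc = new StreamingContext(conf, Seconds(10))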
As shown earlier, KafkaRDDPartition itself is just a simple data structure: an index, a topic, a partition id, the offset bounds, and the preferred host and port.
Summary:
A KafkaRDDPartition belongs to exactly one topic; a partition never spans multiple topics. Consumption runs directly against a Kafka topic: data keeps arriving and offsets keep advancing — an offset is the pointer into Kafka's data. With data flowing into Kafka continuously and a batch duration of, say, ten seconds, each batch consumes a slice of the configured topics; the next batch then takes the newly arrived data, continuing from where the previous batch stopped (or from the configured starting position).
Comparing direct Kafka access with receiver-based reading:
Benefit 1:
Direct access keeps no receiver-side cache, so the out-of-memory problems that come with buffering simply do not arise. Receiver-based reading does buffer data, and you must tune the read rate, block interval, and related settings, as sketched below.
Benefit 2:
By default a receiver is bound to one executor on one worker, which makes distribution inconvenient (it can be configured to be distributed, but it takes effort). With the direct approach, a KafkaRDD's data lives by default on executors across multiple workers — the data is naturally distributed across executors — whereas the receiver model is less convenient for computation.
Benefit 3:
Keeping up with consumption. In practice the receiver approach has a real weakness: if data arrives faster than it can be processed and the delay grows, the Spark Streaming application may crash. This does not happen with direct access, because the direct approach reads Kafka on demand: if a batch is delayed, the read for the next batch duration simply does not start.
Benefit 4:
Full semantic consistency — no duplicate consumption, and a guarantee that every record is consumed. The application interacts with Kafka itself, and an offset is recorded only after the data has genuinely been processed successfully, as sketched below.
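A sketch of that end-to-end pattern, assuming a hypothetical saveResultsAndOffsets helper that writes the results and the untilOffsets in a single transaction — the helper is illustrative, not part of the Spark API, and stream is the direct stream from the earlier sketch:

stream.foreachRDD { rdd =>
  val ranges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  // Driver-side action; the computation itself is replayable.
  val counts = rdd.map(_._2).countByValue()
  // Commit the results and the consumed offsets atomically, so that on failure
  // the batch is either fully applied or fully replayed -- never half-counted.
  saveResultsAndOffsets(counts, ranges) // hypothetical transactional helper
}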
For production environments, the direct approach to reading Kafka data is strongly recommended.
(Spark Streaming release notes, part 15)