There are two ways to ingest data into Spark Streaming: the Receiver-based approach and the no-receivers (direct) approach.

For enterprise-grade Spark Streaming applications, the no-receivers approach is recommended, for two reasons:

1. Finer control over how data is consumed

2. Consistent (exactly-once) semantics

The no-receivers approach fits more naturally with how data is read and operated on: Spark's computation framework always sits on top of some data source, and operating on that source directly (the direct approach) is the more natural fit. Whatever wraps the data source must ultimately be exposed at the RDD level.

That is why Spark provides a custom RDD, KafkaRDD; it differs from other RDDs only in its data source.

Let's look at the source code.

The scaladoc describes a batch-oriented way of consuming Kafka data: the starting and ending offsets are determined in advance, which is exactly what allows exactly-once semantics.

The key parameter is metadata.broker.list (or bootstrap.servers), which lets Spark talk to the Kafka cluster directly; each read against Kafka covers a range of offsets.
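For example, a minimal kafkaParams map might look like this (broker host names and ports are placeholders; note these are Kafka brokers, not ZooKeeper):

// Point the direct API at the Kafka brokers themselves.
val kafkaParams = Map[String, String](
  "metadata.broker.list" -> "host1:9092,host2:9092")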

/**
 * A batch-oriented interface for consuming from Kafka.
 * Starting and ending offsets are specified in advance,
 * so that you can control exactly-once semantics.
 * @param kafkaParams Kafka <a href="http://kafka.apache.org/documentation.html#configuration">
 * configuration parameters</a>. Requires "metadata.broker.list" or "bootstrap.servers" to be set
 * with Kafka broker(s) specified in host1:port1,host2:port2 form.
 * @param offsetRanges offset ranges that define the Kafka data belonging to this RDD
 * @param messageHandler function for translating each message into the desired type
 */
private[kafka]
class KafkaRDD[
  K: ClassTag,
  V: ClassTag,
  U <: Decoder[_]: ClassTag,
  T <: Decoder[_]: ClassTag,
  R: ClassTag] private[spark] (
    sc: SparkContext,
    kafkaParams: Map[String, String],
    val offsetRanges: Array[OffsetRange],
    leaders: Map[TopicAndPartition, (String, Int)],
    messageHandler: MessageAndMetadata[K, V] => R
  ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges {

To access Kafka data directly you need a custom KafkaRDD, just as reading HBase data directly would require a custom HBaseRDD. One requirement is the HasOffsetRanges trait: an RDD is fundamentally a list of partitions, and an RDD that reads Kafka directly must be of type HasOffsetRanges, which exposes the offset ranges covering each Kafka TopicAndPartition. An OffsetRange instance is created with OffsetRange.create() and spans fromOffset (inclusive) to untilOffset (exclusive); because offsets are shipped around the cluster, OffsetRange must be Serializable.
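For instance, a small sketch of building an OffsetRange explicitly (topic name and offsets are illustrative):

import org.apache.spark.streaming.kafka.OffsetRange

// Partition 0 of "myTopic", from offset 50000 (inclusive) to 100000 (exclusive).
val range = OffsetRange.create("myTopic", 0, 50000L, 100000L)
println(range.count())  // 50000 messages in this range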

/**
 * Represents any object that has a collection of [[OffsetRange]]s. This can be used to access the
 * offset ranges in RDDs generated by the direct Kafka DStream (see
 * [[KafkaUtils.createDirectStream()]]).
 * {{{
 *   KafkaUtils.createDirectStream(...).foreachRDD { rdd =>
 *      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
 *      ...
 *   }
 * }}}
 */
trait HasOffsetRanges {
  def offsetRanges: Array[OffsetRange]
}

/**
 * Represents a range of offsets from a single Kafka TopicAndPartition. Instances of this class
 * can be created with `OffsetRange.create()`.
 * @param topic Kafka topic name
 * @param partition Kafka partition id
 * @param fromOffset Inclusive starting offset
 * @param untilOffset Exclusive ending offset
 */
final class OffsetRange private(
    val topic: String,
    val partition: Int,
    val fromOffset: Long,
    val untilOffset: Long) extends Serializable {
  import OffsetRange.OffsetRangeTuple

  /** Kafka TopicAndPartition object, for convenience */
  def topicAndPartition(): TopicAndPartition = TopicAndPartition(topic, partition)

  /** Number of messages this OffsetRange refers to */
  def count(): Long = untilOffset - fromOffset

An offset is a message's position in the log. If untilOffset is 100,000 and fromOffset is 50,000, the range covers message 50,000 up to (but not including) message 100,000; the size of the data to process is usually measured in number of messages, i.e. count() = untilOffset - fromOffset.

When you create an OffsetRange instance you pin down exactly which topic and partition of the Kafka cluster is read and over which offsets. Inside foreachRDD you can obtain the offset ranges of all partitions the current RDD covers — the partition data of the RDD produced in each batch duration. This is metadata-level control.
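For example, a minimal sketch of reading each batch's offset ranges (assuming directStream is the DStream returned by KafkaUtils.createDirectStream):

import org.apache.spark.streaming.kafka.{HasOffsetRanges, OffsetRange}

directStream.foreachRDD { rdd =>
  // The underlying KafkaRDD implements HasOffsetRanges, so this batch's metadata is available here.
  val offsetRanges: Array[OffsetRange] = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  offsetRanges.foreach { o =>
    println(s"topic=${o.topic} partition=${o.partition} " +
      s"from=${o.fromOffset} until=${o.untilOffset} count=${o.count()}")
  }
}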

Now look at the getPartitions method: offsetRanges specifies, for each OffsetRange, where reading starts and where it ends.

override def getPartitions: Array[Partition] = {
  offsetRanges.zipWithIndex.map { case (o, i) =>
      val (host, port) = leaders(TopicAndPartition(o.topic, o.partition))
      new KafkaRDDPartition(i, o.topic, o.partition, o.fromOffset, o.untilOffset, host, port)
  }.toArray
}

Next, the KafkaRDDPartition class: each partition records the topic, partition, and offsets from which the Kafka data will be fetched.

/** @param topic kafka topic name
  * @param partition kafka partition id
  * @param fromOffset inclusive starting offset
  * @param untilOffset exclusive ending offset
  * @param host preferred kafka host, i.e. the leader at the time the rdd was created
  * @param port preferred kafka host's port
  */
private[kafka]
class KafkaRDDPartition(
  val index: Int,
  val topic: String,
  val partition: Int,
  val fromOffset: Long,
  val untilOffset: Long,
  val host: String,
  val port: Int
) extends Partition {
  /** Number of messages this partition refers to */
  def count(): Long = untilOffset - fromOffset
}

host and port identify the Kafka broker (the partition leader) from which the data is read.

KafkaRDD's compute method computes each data partition, exactly as the RDD model dictates: each call produces one slice of the RDD to iterate over. Working with KafkaRDDIterator is the same as working with any RDD partition — it iterates over the records of that slice:

override def compute(thePart: Partition, context: TaskContext): Iterator[R] = {
  val part = thePart.asInstanceOf[KafkaRDDPartition]
  assert(part.fromOffset <= part.untilOffset, errBeginAfterEnd(part))
  if (part.fromOffset == part.untilOffset) {
    log.info(s"Beginning offset ${part.fromOffset} is the same as ending offset " +
      s"skipping ${part.topic} ${part.partition}")
    Iterator.empty
  } else {
    new KafkaRDDIterator(part, context)
  }
}

private class KafkaRDDIterator(
    part: KafkaRDDPartition,
    context: TaskContext) extends NextIterator[R] {

  context.addTaskCompletionListener{ context => closeIfNeeded() }

  log.info(s"Computing topic ${part.topic}, partition ${part.partition} " +
    s"offsets ${part.fromOffset} -> ${part.untilOffset}")

  val kc = new KafkaCluster(kafkaParams)
  val keyDecoder = classTag[U].runtimeClass.getConstructor(classOf[VerifiableProperties])
    .newInstance(kc.config.props)
    .asInstanceOf[Decoder[K]]
  val valueDecoder = classTag[T].runtimeClass.getConstructor(classOf[VerifiableProperties])
    .newInstance(kc.config.props)
    .asInstanceOf[Decoder[V]]
  val consumer = connectLeader
  var requestOffset = part.fromOffset
  var iter: Iterator[MessageAndOffset] = null

  // The idea is to use the provided preferred host, except on task retry attempts,
  // to minimize number of kafka metadata requests
  private def connectLeader: SimpleConsumer = {
    if (context.attemptNumber > 0) {
      kc.connectLeader(part.topic, part.partition).fold(
        errs => throw new SparkException(
          s"Couldn't connect to leader for topic ${part.topic} ${part.partition}: " +
            errs.mkString("\n")),
        consumer => consumer
      )
    } else {
      kc.connect(part.host, part.port)
    }
  }

  private def handleFetchErr(resp: FetchResponse) {
    if (resp.hasError) {
      val err = resp.errorCode(part.topic, part.partition)
      if (err == ErrorMapping.LeaderNotAvailableCode ||
        err == ErrorMapping.NotLeaderForPartitionCode) {
        log.error(s"Lost leader for topic ${part.topic} partition ${part.partition}, " +
          s" sleeping for ${kc.config.refreshLeaderBackoffMs}ms")
        Thread.sleep(kc.config.refreshLeaderBackoffMs)
      }
      // Let normal rdd retry sort out reconnect attempts
      throw ErrorMapping.exceptionFor(err)
    }
  }

The key point is that KafkaUtils.createDirectStream creates the KafkaCluster object itself. Looking back at typical application code that consumes Kafka, the arguments passed in are the streaming context, the broker list (kafkaParams), and the topic list; a sketch of such application code is shown right after the createDirectStream definition below.

topics is passed in as a Set (alternatively, you can specify offset ranges explicitly). createDirectStream builds a KafkaCluster to talk to the Kafka cluster and obtains the concrete starting offsets (fromOffsets):

/**
 * Create an input stream that directly pulls messages from Kafka Brokers
 * without using any receiver. This stream can guarantee that each message
 * from Kafka is included in transformations exactly once (see points below).
 *
 * Points to note:
 *  - No receivers: This stream does not use any receiver. It directly queries Kafka
 *  - Offsets: This does not use Zookeeper to store offsets. The consumed offsets are tracked
 *    by the stream itself. For interoperability with Kafka monitoring tools that depend on
 *    Zookeeper, you have to update Kafka/Zookeeper yourself from the streaming application.
 *    You can access the offsets used in each batch from the generated RDDs (see
 *    [[org.apache.spark.streaming.kafka.HasOffsetRanges]]).
 *  - Failure Recovery: To recover from driver failures, you have to enable checkpointing
 *    in the [[StreamingContext]]. The information on consumed offset can be
 *    recovered from the checkpoint. See the programming guide for details (constraints, etc.).
 *  - End-to-end semantics: This stream ensures that every records is effectively received and
 *    transformed exactly once, but gives no guarantees on whether the transformed data are
 *    outputted exactly once. For end-to-end exactly-once semantics, you have to either ensure
 *    that the output operation is idempotent, or use transactions to output records atomically.
 *    See the programming guide for more details.
 *
 * @param ssc StreamingContext object
 * @param kafkaParams Kafka <a href="http://kafka.apache.org/documentation.html#configuration">
 *   configuration parameters</a>. Requires "metadata.broker.list" or "bootstrap.servers"
 *   to be set with Kafka broker(s) (NOT zookeeper servers), specified in
 *   host1:port1,host2:port2 form.
 *   If not starting from a checkpoint, "auto.offset.reset" may be set to "largest" or "smallest"
 *   to determine where the stream starts (defaults to "largest")
 * @param topics Names of the topics to consume
 * @tparam K type of Kafka message key
 * @tparam V type of Kafka message value
 * @tparam KD type of Kafka message key decoder
 * @tparam VD type of Kafka message value decoder
 * @return DStream of (Kafka message key, Kafka message value)
 */
def createDirectStream[
  K: ClassTag,
  V: ClassTag,
  KD <: Decoder[K]: ClassTag,
  VD <: Decoder[V]: ClassTag] (
    ssc: StreamingContext,
    kafkaParams: Map[String, String],
    topics: Set[String]
): InputDStream[(K, V)] = {
  val messageHandler = (mmd: MessageAndMetadata[K, V]) => (mmd.key, mmd.message)
  val kc = new KafkaCluster(kafkaParams)
  val fromOffsets = getFromOffsets(kc, kafkaParams, topics)
  new DirectKafkaInputDStream[K, V, KD, VD, (K, V)](
    ssc, kafkaParams, fromOffsets, messageHandler)
}
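For reference, here is a minimal sketch of application code calling createDirectStream; the broker addresses, topic name, and batch duration are placeholders:

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val conf = new SparkConf().setAppName("DirectKafkaExample")
val ssc = new StreamingContext(conf, Seconds(10))

// Kafka brokers (not ZooKeeper) and the topics to consume.
val kafkaParams = Map[String, String]("metadata.broker.list" -> "host1:9092,host2:9092")
val topics = Set("myTopic")

// Yields a DStream[(K, V)] of (message key, message value) pairs.
val directStream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, topics)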

Now look at the getFromOffsets method:

private[kafka] def getFromOffsets(
    kc: KafkaCluster,
    kafkaParams: Map[String, String],
    topics: Set[String]
  ): Map[TopicAndPartition, Long] = {
  val reset = kafkaParams.get("auto.offset.reset").map(_.toLowerCase)
  val result = for {
    topicPartitions <- kc.getPartitions(topics).right
    leaderOffsets <- (if (reset == Some("smallest")) {
      kc.getEarliestLeaderOffsets(topicPartitions)
    } else {
      kc.getLatestLeaderOffsets(topicPartitions)
    }).right
  } yield {
    leaderOffsets.map { case (tp, lo) =>
        (tp, lo.offset)
    }
  }
  KafkaCluster.checkErrors(result)
}
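Accordingly, a small sketch of starting from the earliest retained offsets rather than the latest (broker addresses are placeholders):

// "auto.offset.reset" -> "smallest" makes getFromOffsets use the earliest leader offsets;
// the default ("largest") starts from the tip of each partition's log.
val kafkaParams = Map[String, String](
  "metadata.broker.list" -> "host1:9092,host2:9092",
  "auto.offset.reset" -> "smallest")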

If fromOffsets is not supplied explicitly, it is derived from the configuration ("auto.offset.reset") by querying the Kafka cluster for partition and offset information when the stream is created. Either way, createDirectStream ultimately constructs a DirectKafkaInputDStream:

/**
 * A stream of {@link org.apache.spark.streaming.kafka.KafkaRDD} where
 * each given Kafka topic/partition corresponds to an RDD partition.
 * The spark configuration spark.streaming.kafka.maxRatePerPartition gives the maximum number
 * of messages
 * per second that each '''partition''' will accept.
 * Starting offsets are specified in advance,
 * and this DStream is not responsible for committing offsets,
 * so that you can control exactly-once semantics.
 * For an easy interface to Kafka-managed offsets,
 * see {@link org.apache.spark.streaming.kafka.KafkaCluster}
 * @param kafkaParams Kafka <a href="http://kafka.apache.org/documentation.html#configuration">
 *   configuration parameters</a>.
 *   Requires "metadata.broker.list" or "bootstrap.servers" to be set with Kafka broker(s),
 *   NOT zookeeper servers, specified in host1:port1,host2:port2 form.
 * @param fromOffsets per-topic/partition Kafka offsets defining the (inclusive)
 *   starting point of the stream
 * @param messageHandler function for translating each message into the desired type
 */
private[streaming]
class DirectKafkaInputDStream[
  K: ClassTag,
  V: ClassTag,
  U <: Decoder[K]: ClassTag,
  T <: Decoder[V]: ClassTag,
  R: ClassTag](
    ssc_ : StreamingContext,
    val kafkaParams: Map[String, String],
    val fromOffsets: Map[TopicAndPartition, Long],
    messageHandler: MessageAndMetadata[K, V] => R
  ) extends InputDStream[R](ssc_) with Logging {
  val maxRetries = context.sparkContext.getConf.getInt(
    "spark.streaming.kafka.maxRetries", 1)

DirectKafkaInputDStream produces KafkaRDDs: each Kafka topic/partition maps to a corresponding KafkaRDDPartition, and the spark.streaming.kafka.maxRatePerPartition setting limits how fast each partition is consumed. When data is processed, the DStream's compute method builds a KafkaRDD directly and reads the data from Kafka. Once the offset range for the batch is determined, the number of messages to read is known, and a KafkaRDD instance is constructed. Each call to compute produces exactly one KafkaRDD for the DirectKafkaInputDStream, and that RDD contains as many partitions as there are Kafka partitions being consumed.
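For example, a hedged sketch of capping the per-partition read rate (the value and app name are illustrative):

import org.apache.spark.SparkConf

// Cap each Kafka partition at 1,000 messages per second.
// With a 10-second batch and 4 partitions, one KafkaRDD would then cover
// at most 1000 * 10 * 4 = 40,000 messages.
val conf = new SparkConf()
  .setAppName("DirectKafkaRateLimit")
  .set("spark.streaming.kafka.maxRatePerPartition", "1000")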

As shown earlier, KafkaRDDPartition itself is just a simple data structure: it carries the partition index, topic, partition id, offset range, and the preferred host and port.


Summary:

A KafkaRDDPartition belongs to exactly one topic; a partition never spans multiple topics. The direct approach consumes a Kafka topic partition directly: as data keeps arriving on the topic, the offset keeps advancing — the offset is the pointer into the Kafka data.

Data flows into Kafka continuously. With, say, a ten-second batch duration, every batch consumes data from the configured topics, reading its offset range until it is exhausted; the next batch then reads the data that has arrived since, continuing either from the configured starting position or from where the previous batch left off.

Comparing direct Kafka access with receiver-based reading:

Advantage 1:

With the direct approach there is no receiver-side buffering, so problems such as out-of-memory errors caused by buffered data do not arise. The receiver-based approach does buffer data, and you have to tune the ingestion rate, block interval, and related settings.

Advantage 2:

With the receiver approach, a receiver is bound by default to a single executor on a worker, which makes distributing the load awkward (multiple receivers can be configured, but it takes extra work). With the direct approach, data lands on executors across multiple workers by default: KafkaRDD partitions are naturally distributed across executors, whereas the receiver model is less convenient for distributed computation.

Advantage 3:

Keeping consumption in step with processing: in practice the receiver approach has the drawback that when processing cannot keep up and the delay grows, the Spark Streaming application may crash. This does not happen with the direct approach: because it reads Kafka directly, if processing is delayed it simply does not start reading for the next batch duration.

Advantage 4:

Full exactly-once semantics: data is neither consumed twice nor lost. The application interacts with Kafka directly, and an offset range is recorded as consumed only after the data has actually been processed successfully.
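To make the last point concrete, here is a hedged sketch of the "record offsets only after successful output" pattern; directStream is assumed to be the direct stream created earlier, and saveOffsetsAndResultsAtomically is a hypothetical application-level function (e.g. a single database transaction):

import org.apache.spark.streaming.kafka.{HasOffsetRanges, OffsetRange}

directStream.foreachRDD { rdd =>
  // Offsets of the batch about to be processed.
  val offsetRanges: Array[OffsetRange] = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  // Do the actual work for this batch.
  val results = rdd.map { case (_, value) => value.length }.collect()
  // Persist results and offsets together (one transaction, or an idempotent write),
  // so the batch counts as consumed only after its output has really succeeded.
  saveOffsetsAndResultsAtomically(offsetRanges, results)  // hypothetical helper
}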

In production, the direct approach to reading Kafka data is strongly recommended.

Sina Weibo: http://weibo.com/ilovepains

WeChat official account: DT_Spark

Blog: http://blog.sina.com.cn/ilovepains

Phone: 18610086859

QQ: 1740415547

Email: 18610086859@vip.126.com

Spark Streaming Distribution Notes, Part 15
