storm-kafka Source Walkthrough (Part 5): KafkaSpout
from: http://blog.csdn.net/wzhg0508/article/details/40903919
Now let's get into the KafkaSpout source code.

At startup, the open method performs some initialization:
```java
// ........................
_state = new ZkState(stateConf);
_connections = new DynamicPartitionConnections(_spoutConfig, KafkaUtils.makeBrokerReader(conf, _spoutConfig));
// using TransactionalState like this is a hack
int totalTasks = context.getComponentTasks(context.getThisComponentId()).size();
if (_spoutConfig.hosts instanceof StaticHosts) {
    _coordinator = new StaticCoordinator(_connections, conf, _spoutConfig, _state, context.getThisTaskIndex(), totalTasks, _uuid);
} else {
    _coordinator = new ZkCoordinator(_connections, conf, _spoutConfig, _state, context.getThisTaskIndex(), totalTasks, _uuid);
}
// ............
```
Some code before and after is omitted; the metric-related parts are not covered here. The main work is initializing the ZooKeeper connection (ZkState) and mapping Kafka partitions to brokers by constructing DynamicPartitionConnections. The DynamicPartitionConnections constructor takes a brokerReader; since we are using ZkHosts, a look at the KafkaUtils code shows that a ZkBrokerReader is used. Here is the ZkBrokerReader constructor:
```java
public ZkBrokerReader(Map conf, String topic, ZkHosts hosts) {
    try {
        reader = new DynamicBrokersReader(conf, hosts.brokerZkStr, hosts.brokerZkPath, topic);
        cachedBrokers = reader.getBrokerInfo();
        lastRefreshTimeMs = System.currentTimeMillis();
        refreshMillis = hosts.refreshFreqSecs * 1000L;
    } catch (java.net.SocketTimeoutException e) {
        LOG.warn("Failed to update brokers", e);
    }
}
```
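The choice between the two readers is made in KafkaUtils.makeBrokerReader, which open passes to DynamicPartitionConnections. A minimal sketch of that dispatch (paraphrased from the behavior described above, not guaranteed to be the verbatim source):

```java
// Sketch of the dispatch in KafkaUtils.makeBrokerReader (paraphrased, not verbatim):
// StaticHosts gets a StaticBrokerReader, anything else (i.e. ZkHosts) gets a ZkBrokerReader.
public static IBrokerReader makeBrokerReader(Map stormConf, KafkaConfig conf) {
    if (conf.hosts instanceof StaticHosts) {
        return new StaticBrokerReader(((StaticHosts) conf.hosts).getPartitionInformation());
    } else {
        return new ZkBrokerReader(stormConf, conf.topic, (ZkHosts) conf.hosts);
    }
}
```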
Back in the ZkBrokerReader constructor, note the refreshMillis field: it controls how often the partition information cached from ZooKeeper is refreshed.
```java
//ZkBrokerReader
@Override
public GlobalPartitionInformation getCurrentBrokers() {
    long currTime = System.currentTimeMillis();
    if (currTime > lastRefreshTimeMs + refreshMillis) { // more than refreshMillis has elapsed since the last refresh
        try {
            LOG.info("brokers need refreshing because " + refreshMillis + "ms have expired");
            cachedBrokers = reader.getBrokerInfo();
            lastRefreshTimeMs = currTime;
        } catch (java.net.SocketTimeoutException e) {
            LOG.warn("Failed to update brokers", e);
        }
    }
    return cachedBrokers;
}
```
Here is the DynamicBrokersReader.getBrokerInfo method it delegates to:
```java
/**
 * Get all partitions with their current leaders
 */
public GlobalPartitionInformation getBrokerInfo() throws SocketTimeoutException {
    GlobalPartitionInformation globalPartitionInformation = new GlobalPartitionInformation();
    try {
        int numPartitionsForTopic = getNumPartitions();
        String brokerInfoPath = brokerPath();
        for (int partition = 0; partition < numPartitionsForTopic; partition++) {
            int leader = getLeaderFor(partition);
            String path = brokerInfoPath + "/" + leader;
            try {
                byte[] brokerData = _curator.getData().forPath(path);
                Broker hp = getBrokerHost(brokerData);
                globalPartitionInformation.addPartition(partition, hp);
            } catch (org.apache.zookeeper.KeeperException.NoNodeException e) {
                LOG.error("Node {} does not exist ", path);
            }
        }
    } catch (SocketTimeoutException e) {
        throw e;
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
    LOG.info("Read partition info from zookeeper: " + globalPartitionInformation);
    return globalPartitionInformation;
}
```
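The getBrokerHost call above turns the broker registration data stored in ZooKeeper (a small JSON blob with the broker's host and port) into a Broker object. A hedged sketch of that parsing, assuming the usual {"host": ..., "port": ...} layout rather than quoting the exact DynamicBrokersReader code:

```java
// Sketch of parsing the broker JSON from ZooKeeper into a Broker
// (assumes the data looks like {"host": "kafka1", "port": 9092, ...}).
private Broker getBrokerHost(byte[] contents) {
    try {
        Map<Object, Object> value =
                (Map<Object, Object>) JSONValue.parse(new String(contents, "UTF-8"));
        String host = (String) value.get("host");
        Integer port = ((Long) value.get("port")).intValue();
        return new Broker(host, port);
    } catch (UnsupportedEncodingException e) {
        throw new RuntimeException(e);
    }
}
```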
GlobalPartitionInformation is an iterable class that stores the mapping between partitions and brokers. DynamicPartitionConnections maintains the relationship between Kafka consumers and partitions, i.e. which partitions each consumer reads. This ConnectionInfo is initialized and updated in storm.kafka.ZkCoordinator. One thing worth mentioning is that a KafkaSpout contains a SimpleConsumer.
```java
//storm.kafka.DynamicPartitionConnections
static class ConnectionInfo {
    SimpleConsumer consumer;
    Set<Integer> partitions = new HashSet();

    public ConnectionInfo(SimpleConsumer consumer) {
        this.consumer = consumer;
    }
}
```
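To make ConnectionInfo's role concrete, here is a simplified sketch of how a registration call could maintain this map, creating one SimpleConsumer per broker and remembering which partitions it serves (an illustration of the idea, not the exact DynamicPartitionConnections implementation):

```java
// Simplified sketch (illustration only, not the exact storm-kafka code):
// one SimpleConsumer per broker, shared by all partitions led by that broker.
public SimpleConsumer register(Broker host, int partition) {
    if (!_connections.containsKey(host)) {
        // kafka.javaapi.consumer.SimpleConsumer(host, port, soTimeout, bufferSize, clientId)
        SimpleConsumer consumer = new SimpleConsumer(host.host, host.port,
                _config.socketTimeoutMs, _config.bufferSizeBytes, _config.clientId);
        _connections.put(host, new ConnectionInfo(consumer));
    }
    ConnectionInfo info = _connections.get(host);
    info.partitions.add(partition);
    return info.consumer;
}
```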
Next, look at the ZkCoordinator class, starting with its constructor:
```java
//storm.kafka.ZkCoordinator
public ZkCoordinator(DynamicPartitionConnections connections, Map stormConf, SpoutConfig spoutConfig, ZkState state, int taskIndex, int totalTasks, String topologyInstanceId, DynamicBrokersReader reader) {
    _spoutConfig = spoutConfig;
    _connections = connections;
    _taskIndex = taskIndex;
    _totalTasks = totalTasks;
    _topologyInstanceId = topologyInstanceId;
    _stormConf = stormConf;
    _state = state;
    ZkHosts brokerConf = (ZkHosts) spoutConfig.hosts;
    _refreshFreqMs = brokerConf.refreshFreqSecs * 1000;
    _reader = reader;
}
```
_refreshFreqMs controls how often the partition information in ZooKeeper is pulled down locally. Every call to KafkaSpout's nextTuple invokes ZkCoordinator's getMyManagedPartitions method, which refreshes the partition information periodically based on _refreshFreqMs:
```java
//storm.kafka.ZkCoordinator
@Override
public List<PartitionManager> getMyManagedPartitions() {
    if (_lastRefreshTime == null || (System.currentTimeMillis() - _lastRefreshTime) > _refreshFreqMs) {
        refresh();
        _lastRefreshTime = System.currentTimeMillis();
    }
    return _cachedList;
}

@Override
public void refresh() {
    try {
        LOG.info(taskId(_taskIndex, _totalTasks) + "Refreshing partition manager connections");
        GlobalPartitionInformation brokerInfo = _reader.getBrokerInfo();
        List<Partition> mine = KafkaUtils.calculatePartitionsForTask(brokerInfo, _totalTasks, _taskIndex);
        Set<Partition> curr = _managers.keySet();
        Set<Partition> newPartitions = new HashSet<Partition>(mine);
        newPartitions.removeAll(curr);
        Set<Partition> deletedPartitions = new HashSet<Partition>(curr);
        deletedPartitions.removeAll(mine);
        LOG.info(taskId(_taskIndex, _totalTasks) + "Deleted partition managers: " + deletedPartitions.toString());
        for (Partition id : deletedPartitions) {
            PartitionManager man = _managers.remove(id);
            man.close();
        }
        LOG.info(taskId(_taskIndex, _totalTasks) + "New partition managers: " + newPartitions.toString());
        for (Partition id : newPartitions) {
            PartitionManager man = new PartitionManager(_connections, _topologyInstanceId, _state, _stormConf, _spoutConfig, id);
            _managers.put(id, man);
        }
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
    _cachedList = new ArrayList<PartitionManager>(_managers.values());
    LOG.info(taskId(_taskIndex, _totalTasks) + "Finished refreshing");
}
```
The assignment of partitions to each consumer is done by KafkaUtils.calculatePartitionsForTask(brokerInfo, _totalTasks, _taskIndex). It takes the number of parallel tasks, compares it with the current set of partitions, and works out which partitions each consumer is responsible for reading; see the Kafka documentation for the details of the algorithm.
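A minimal sketch of the round-robin idea (an illustration, not the exact KafkaUtils implementation): task i takes every totalTasks-th partition from the ordered partition list, starting at index i.

```java
// Minimal sketch of the round-robin assignment behind calculatePartitionsForTask
// (illustration only, not the exact KafkaUtils code).
public static List<Partition> calculatePartitionsForTask(
        GlobalPartitionInformation partitionInformation, int totalTasks, int taskIndex) {
    List<Partition> ordered = partitionInformation.getOrderedPartitions();
    List<Partition> taskPartitions = new ArrayList<Partition>();
    // task i is responsible for partitions i, i + totalTasks, i + 2 * totalTasks, ...
    for (int i = taskIndex; i < ordered.size(); i += totalTasks) {
        taskPartitions.add(ordered.get(i));
    }
    return taskPartitions;
}
```

For example, with 6 partitions and 3 spout tasks, task 0 would read partitions 0 and 3, task 1 partitions 1 and 4, and task 2 partitions 2 and 5.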
That completes the initialization in KafkaSpout. Now it starts fetching and emitting data; look at the nextTuple method:
```java
// storm.kafka.KafkaSpout
@Override
public void nextTuple() {
    List<PartitionManager> managers = _coordinator.getMyManagedPartitions();
    for (int i = 0; i < managers.size(); i++) {
        try {
            // in case the number of managers decreased
            _currPartitionIndex = _currPartitionIndex % managers.size();
            EmitState state = managers.get(_currPartitionIndex).next(_collector);
            if (state != EmitState.EMITTED_MORE_LEFT) {
                _currPartitionIndex = (_currPartitionIndex + 1) % managers.size();
            }
            if (state != EmitState.NO_EMITTED) {
                break;
            }
        } catch (FailedFetchException e) {
            LOG.warn("Fetch failed", e);
            _coordinator.refresh();
        }
    }
    long now = System.currentTimeMillis();
    if ((now - _lastUpdateMs) > _spoutConfig.stateUpdateIntervalMs) {
        commit();
    }
}
```
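For reference, the EmitState values used here, reconstructed from their usages in nextTuple and PartitionManager.next:

```java
// The three emit states referenced in nextTuple and PartitionManager.next
static enum EmitState {
    EMITTED_MORE_LEFT,  // emitted a tuple; more messages remain in the current batch
    EMITTED_END,        // emitted a tuple and the current batch is exhausted
    NO_EMITTED          // nothing was emitted for this partition
}
```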
As nextTuple shows, all the real work happens in PartitionManager: it reads the messages and emits them. The main logic lives in PartitionManager's next method:
```java
//returns false if it's reached the end of current batch
public EmitState next(SpoutOutputCollector collector) {
    if (_waitingToEmit.isEmpty()) {
        fill();
    }
    while (true) {
        MessageAndRealOffset toEmit = _waitingToEmit.pollFirst();
        if (toEmit == null) {
            return EmitState.NO_EMITTED;
        }
        Iterable<List<Object>> tups = KafkaUtils.generateTuples(_spoutConfig, toEmit.msg);
        if (tups != null) {
            for (List<Object> tup : tups) {
                collector.emit(tup, new KafkaMessageId(_partition, toEmit.offset));
            }
            break;
        } else {
            ack(toEmit.offset);
        }
    }
    if (!_waitingToEmit.isEmpty()) {
        return EmitState.EMITTED_MORE_LEFT;
    } else {
        return EmitState.EMITTED_END;
    }
}
```
If the _waitingToEmit list is empty, next first fetches messages via fill() and then emits them one at a time. After each emit it breaks out of the loop and returns EMITTED_MORE_LEFT to KafkaSpout's nextTuple, which checks whether the batch of messages fetched for that partition has been fully emitted; once it has, the spout moves on to reading and emitting the next partition's data.

Note that the spout does not emit all of a partition's pending messages before committing the offset to ZooKeeper. Instead, after each emit it checks whether it is time to commit (the commit interval configured at startup). In my view this is done to make failure handling easier to control.
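The msgId attached to each emitted tuple is a KafkaMessageId pairing the partition with the message offset; this is what lets ack and fail be routed back to the right PartitionManager below. Reconstructed from its usages in the code above, it is essentially:

```java
// KafkaMessageId: the message id attached to every emitted tuple,
// reconstructed from its usages (new KafkaMessageId(_partition, toEmit.offset)).
static class KafkaMessageId {
    public Partition partition;
    public long offset;

    public KafkaMessageId(Partition partition, long offset) {
        this.partition = partition;
        this.offset = offset;
    }
}
```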
The ack, fail, and commit operations in KafkaSpout are all delegated to PartitionManager; see the code:
```java
@Override
public void ack(Object msgId) {
    KafkaMessageId id = (KafkaMessageId) msgId;
    PartitionManager m = _coordinator.getManager(id.partition);
    if (m != null) {
        m.ack(id.offset);
    }
}

@Override
public void fail(Object msgId) {
    KafkaMessageId id = (KafkaMessageId) msgId;
    PartitionManager m = _coordinator.getManager(id.partition);
    if (m != null) {
        m.fail(id.offset);
    }
}

@Override
public void deactivate() {
    commit();
}

@Override
public void declareOutputFields(OutputFieldsDeclarer declarer) {
    declarer.declare(_spoutConfig.scheme.getOutputFields());
}

private void commit() {
    _lastUpdateMs = System.currentTimeMillis();
    for (PartitionManager manager : _coordinator.getMyManagedPartitions()) {
        manager.commit();
    }
}
```
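As a rough illustration of what each manager.commit() persists (an assumed shape for illustration, not the verbatim PartitionManager code), the manager writes the committed offset plus enough metadata to identify the partition into ZooKeeper through ZkState:

```java
// Rough sketch of the commit idea (assumed shape, not the verbatim PartitionManager code):
// persist the committed offset plus partition metadata to ZooKeeper through ZkState.
public void commit() {
    Map<Object, Object> data = new HashMap<Object, Object>();
    data.put("offset", _committedTo);                // offset up to which everything has been acked
    data.put("partition", _partition.partition);     // which partition this manager owns
    data.put("broker", _partition.host.toString());  // the partition's leader broker
    data.put("topic", _spoutConfig.topic);
    _state.writeJSON(committedPath(), data);         // committedPath(): assumed helper building the per-partition ZK path
}
```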
So PartitionManager is the core of KafkaSpout. It's very late (past 3 a.m.), so the analysis of PartitionManager will follow in a later post. Good night.