When is MetadataCache updated?

The updateCache method is what actually updates the cache.

Originating thread: controller-event-thread

When the controller is elected

CLASS_NAME METHOD_NAME LINE_NUM
kafka/controller/KafkaController sendUpdateMetadataRequest 1043
kafka/controller/KafkaController onControllerFailover 288
kafka/controller/KafkaController elect 1658
kafka/controller/KafkaController$Startup$ process 1581
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply$mcV$sp 53
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply 53
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply 53
kafka/metrics/KafkaTimer time 32
kafka/controller/ControllerEventManager$ControllerEventThread doWork 64
kafka/utils/ShutdownableThread run 70

The election happens at startup, and startup itself is also modeled as an event:


// KafkaController.scala
case object Startup extends ControllerEvent {

  def state = ControllerState.ControllerChange

  override def process(): Unit = {
    registerSessionExpirationListener()
    registerControllerChangeListener()
    elect()
  }
}

When a broker starts up

CLASS_NAME METHOD_NAME LINE_NUM
kafka/controller/KafkaController sendUpdateMetadataRequest 1043
kafka/controller/KafkaController onBrokerStartup 387
kafka/controller/KafkaController$BrokerChange process 1208
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply$mcV$sp 53
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply 53
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply 53
kafka/metrics/KafkaTimer time 32
kafka/controller/ControllerEventManager$ControllerEventThread doWork 64
kafka/utils/ShutdownableThread run 70

When a topic is deleted

CLASS_NAME METHOD_NAME LINE_NUM
kafka/controller/KafkaController sendUpdateMetadataRequest 1043
kafka/controller/TopicDeletionManager kafka$controller$TopicDeletionManager$$onTopicDeletion 268
kafka/controller/TopicDeletionManager$$anonfun$resumeDeletions$2 apply 333
kafka/controller/TopicDeletionManager$$anonfun$resumeDeletions$2 apply 333
scala/collection/immutable/Set$Set1 foreach 94
kafka/controller/TopicDeletionManager resumeDeletions 333
kafka/controller/TopicDeletionManager enqueueTopicsForDeletion 110
kafka/controller/KafkaController$TopicDeletion process 1280
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply$mcV$sp 53
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply 53
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply 53
kafka/metrics/KafkaTimer time 32
kafka/controller/ControllerEventManager$ControllerEventThread doWork 64
kafka/utils/ShutdownableThread run 70

When a topic is created or modified

CLASS_NAME METHOD_NAME LINE_NUM
kafka/controller/ControllerBrokerRequestBatch updateMetadataRequestBrokerSet 291
kafka/controller/ControllerBrokerRequestBatch newBatch 294
kafka/controller/PartitionStateMachine handleStateChanges 105
kafka/controller/KafkaController onNewPartitionCreation 499
kafka/controller/KafkaController onNewTopicCreation 485
kafka/controller/KafkaController$TopicChange process 1237
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply$mcV$sp 53
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply 53
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply 53
kafka/metrics/KafkaTimer time 32
kafka/controller/ControllerEventManager$ControllerEventThread doWork 64
kafka/utils/ShutdownableThread run 70

For topic creation, the event is taken off a queue and then processed.

The queue is kafka.controller.ControllerEventManager.queue.

The enqueue path is shown below; at bottom it is still a listener on child changes of a ZooKeeper path:

CLASS_NAME METHOD_NAME LINE_NUM
kafka/controller/ControllerEventManager put 44
kafka/controller/TopicChangeListener handleChildChange 1712
org/I0Itec/zkclient/ZkClient$10 run 848
org/I0Itec/zkclient/ZkEventThread run 85
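
The put-then-drain pattern above can be sketched as a minimal event manager. This is a simplified illustration of the single-threaded event model, not Kafka's real ControllerEvent / ControllerEventManager classes; the names here are stand-ins:

```scala
import java.util.concurrent.LinkedBlockingQueue

// Listener threads (e.g. the zkclient event thread) put events on a blocking
// queue; one worker thread (the counterpart of controller-event-thread)
// drains the queue and processes every event serially.
trait Event { def process(): Unit }

class EventManager {
  private val queue = new LinkedBlockingQueue[Event]()
  @volatile private var running = true

  // called from listener threads
  def put(e: Event): Unit = queue.put(e)

  private val worker = new Thread(() => {
    while (running) {
      queue.take().process() // blocks until an event arrives; serial handling
    }
  })

  def start(): Unit = worker.start()

  // post a no-op event so a blocked take() wakes up and sees running == false
  def shutdown(): Unit = {
    running = false
    put(new Event { def process(): Unit = () })
  }
}
```

Because one thread processes all events, handlers like process() in the stacks above need no locking against each other.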

The listener is registered as follows:

// class KafkaController
private def registerTopicChangeListener() = {
  zkUtils.subscribeChildChanges(BrokerTopicsPath, topicChangeListener)
}

As an aside, there are six places that subscribe to ZooKeeper child-node changes:

  • DynamicConfigManager.startup
  • registerTopicChangeListener
  • registerIsrChangeNotificationListener
  • registerTopicDeletionListener
  • registerBrokerChangeListener
  • registerLogDirEventNotificationListener

Handling the topic-creation event:

// ControllerChannelManager.scala  class ControllerBrokerRequestBatch
def sendRequestsToBrokers(controllerEpoch: Int) {
  // .......
  val updateMetadataRequest = {
    val liveBrokers = if (updateMetadataRequestVersion == 0) {
      // .......
    } else {
      controllerContext.liveOrShuttingDownBrokers.map { broker =>
        val endPoints = broker.endPoints.map { endPoint =>
          new UpdateMetadataRequest.EndPoint(endPoint.host, endPoint.port, endPoint.securityProtocol, endPoint.listenerName)
        }
        new UpdateMetadataRequest.Broker(broker.id, endPoints.asJava, broker.rack.orNull)
      }
    }
    new UpdateMetadataRequest.Builder(updateMetadataRequestVersion, controllerId, controllerEpoch, partitionStates.asJava,
      liveBrokers.asJava)
  }
  updateMetadataRequestBrokerSet.foreach { broker =>
    controller.sendRequest(broker, ApiKeys.UPDATE_METADATA, updateMetadataRequest, null)
  }
  // .......
}

Going one step further into the metadata update during topic creation

A send-request item is built, placed on the send queue, and waits for the send thread to dispatch it.

The code that builds and enqueues the request:

// ControllerChannelManager
def sendRequest(brokerId: Int, apiKey: ApiKeys, request: AbstractRequest.Builder[_ <: AbstractRequest],
                callback: AbstractResponse => Unit = null) {
  brokerLock synchronized {
    val stateInfoOpt = brokerStateInfo.get(brokerId)
    stateInfoOpt match {
      case Some(stateInfo) =>
        stateInfo.messageQueue.put(QueueItem(apiKey, request, callback))
      case None =>
        warn("Not sending request %s to broker %d, since it is offline.".format(request, brokerId))
    }
  }
}

Call stack:

CLASS_NAME METHOD_NAME LINE_NUM
kafka/controller/ControllerChannelManager sendRequest 81
kafka/controller/KafkaController sendRequest 662
kafka/controller/ControllerBrokerRequestBatch$$anonfun$sendRequestsToBrokers$2 apply 405
kafka/controller/ControllerBrokerRequestBatch$$anonfun$sendRequestsToBrokers$2 apply 405
scala/collection/mutable/HashMap$$anonfun$foreach$1 apply 130
scala/collection/mutable/HashMap$$anonfun$foreach$1 apply 130
scala/collection/mutable/HashTable$class foreachEntry 241
scala/collection/mutable/HashMap foreachEntry 40
scala/collection/mutable/HashMap foreach 130
kafka/controller/ControllerBrokerRequestBatch sendRequestsToBrokers 502
kafka/controller/PartitionStateMachine handleStateChanges 105
kafka/controller/KafkaController onNewPartitionCreation 499
kafka/controller/KafkaController onNewTopicCreation 485
kafka/controller/KafkaController$TopicChange process 1237
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply$mcV$sp 53
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply 53
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply 53
kafka/metrics/KafkaTimer time 32
kafka/controller/ControllerEventManager$ControllerEventThread doWork 64
kafka/utils/ShutdownableThread run 70

The send thread dispatches the request.

The code:

// ControllerChannelManager.scala  class RequestSendThread
override def doWork(): Unit = {

  def backoff(): Unit = CoreUtils.swallowTrace(Thread.sleep(100))

  val QueueItem(apiKey, requestBuilder, callback) = queue.take()
  //...
  while (isRunning.get() && !isSendSuccessful) {
    // if a broker goes down for a long time, then at some point the controller's zookeeper listener will trigger a
    // removeBroker which will invoke shutdown() on this thread. At that point, we will stop retrying.
    try {
      if (!brokerReady()) {
        isSendSuccessful = false
        backoff()
      }
      else {
        val clientRequest = networkClient.newClientRequest(brokerNode.idString, requestBuilder,
          time.milliseconds(), true)
        clientResponse = NetworkClientUtils.sendAndReceive(networkClient, clientRequest, time)
        isSendSuccessful = true
      }
    } catch {
      case e: Throwable => // if the send was not successful, reconnect to broker and resend the message
        warn(("Controller %d epoch %d fails to send request %s to broker %s. " +
          "Reconnecting to broker.").format(controllerId, controllerContext.epoch,
          requestBuilder.toString, brokerNode.toString), e)
        networkClient.close(brokerNode.idString)
        isSendSuccessful = false
        backoff()
    }
  }
  // ......
}
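
Stripped of the networking details, the loop above is a retry-until-success pattern with a fixed backoff. A generic sketch, not Kafka code (the function and parameter names are illustrative):

```scala
// Retry a send until it succeeds or the thread is asked to stop. Any
// exception counts as a failure and triggers a short backoff before the next
// attempt, mirroring the structure of RequestSendThread.doWork.
def sendWithRetry(trySend: () => Boolean, isRunning: () => Boolean,
                  backoffMs: Long = 100): Boolean = {
  var isSendSuccessful = false
  while (isRunning() && !isSendSuccessful) {
    isSendSuccessful =
      try trySend()
      catch { case _: Throwable => false } // e.g. broker unreachable: retry
    if (!isSendSuccessful) Thread.sleep(backoffMs)
  }
  isSendSuccessful
}
```

The loop only gives up when isRunning() turns false, which in the real code happens when removeBroker shuts the thread down.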

The handler thread on the receiving broker

CLASS_NAME METHOD_NAME LINE_NUM
kafka/server/MetadataCache kafka$server$MetadataCache$$addOrUpdatePartitionInfo 150
kafka/utils/CoreUtils$ inLock 219
kafka/utils/CoreUtils$ inWriteLock 225
kafka/server/MetadataCache updateCache 184
kafka/server/ReplicaManager maybeUpdateMetadataCache 988
kafka/server/KafkaApis handleUpdateMetadataRequest 212
kafka/server/KafkaApis handle 142
kafka/server/KafkaRequestHandler run 72

Thread: kafka-request-handler-5

The partitionMetadataLock read-write lock makes reads and writes of the cache data thread-safe. The metadata itself was already assembled when the request was built on the controller side; this step also updates the set of live brokers, among other things.
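
The locking pattern can be illustrated with a minimal cache guarded by a ReentrantReadWriteLock. This is a simplified sketch in the spirit of MetadataCache; apart from the lock's name, the fields, methods, and cached contents here are made up for illustration:

```scala
import java.util.concurrent.locks.ReentrantReadWriteLock
import scala.collection.mutable

// Many request-handler threads may read the cache concurrently under the
// read lock, while updateCache takes the exclusive write lock. The map's
// contents (topic -> partition count) are illustrative only.
class SimpleMetadataCache {
  private val partitionMetadataLock = new ReentrantReadWriteLock()
  private val cache = mutable.Map.empty[String, Int]

  def updateCache(topic: String, partitions: Int): Unit = {
    val w = partitionMetadataLock.writeLock()
    w.lock()
    try cache(topic) = partitions
    finally w.unlock()
  }

  def partitionCount(topic: String): Option[Int] = {
    val r = partitionMetadataLock.readLock()
    r.lock()
    try cache.get(topic)
    finally r.unlock()
  }
}
```

A read-write lock fits here because metadata reads vastly outnumber updates: readers never block each other, and only an UpdateMetadataRequest briefly excludes them.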

Still to cover: leader changes, ISR changes, and so on.
