When is the MetadataCache updated?

The cache is updated via the `updateCache` method.

Initiating thread: controller-event-thread

During controller election

CLASS_NAME METHOD_NAME LINE_NUM
kafka/controller/KafkaController sendUpdateMetadataRequest 1043
kafka/controller/KafkaController onControllerFailover 288
kafka/controller/KafkaController elect 1658
kafka/controller/KafkaController$Startup$ process 1581
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply$mcV$sp 53
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply 53
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply 53
kafka/metrics/KafkaTimer time 32
kafka/controller/ControllerEventManager$ControllerEventThread doWork 64
kafka/utils/ShutdownableThread run 70

Election happens at startup, and startup itself is also modeled as an event:


    // KafkaController.scala
    case object Startup extends ControllerEvent {

      def state = ControllerState.ControllerChange

      override def process(): Unit = {
        registerSessionExpirationListener()
        registerControllerChangeListener()
        elect()
      }
    }

When a broker starts up

CLASS_NAME METHOD_NAME LINE_NUM
kafka/controller/KafkaController sendUpdateMetadataRequest 1043
kafka/controller/KafkaController onBrokerStartup 387
kafka/controller/KafkaController$BrokerChange process 1208
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply$mcV$sp 53
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply 53
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply 53
kafka/metrics/KafkaTimer time 32
kafka/controller/ControllerEventManager$ControllerEventThread doWork 64
kafka/utils/ShutdownableThread run 70

When a topic is deleted

CLASS_NAME METHOD_NAME LINE_NUM
kafka/controller/KafkaController sendUpdateMetadataRequest 1043
kafka/controller/TopicDeletionManager kafka$controller$TopicDeletionManager$$onTopicDeletion 268
kafka/controller/TopicDeletionManager$$anonfun$resumeDeletions$2 apply 333
kafka/controller/TopicDeletionManager$$anonfun$resumeDeletions$2 apply 333
scala/collection/immutable/Set$Set1 foreach 94
kafka/controller/TopicDeletionManager resumeDeletions 333
kafka/controller/TopicDeletionManager enqueueTopicsForDeletion 110
kafka/controller/KafkaController$TopicDeletion process 1280
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply$mcV$sp 53
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply 53
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply 53
kafka/metrics/KafkaTimer time 32
kafka/controller/ControllerEventManager$ControllerEventThread doWork 64
kafka/utils/ShutdownableThread run 70

When a topic is created or modified

CLASS_NAME METHOD_NAME LINE_NUM
kafka/controller/ControllerBrokerRequestBatch updateMetadataRequestBrokerSet 291
kafka/controller/ControllerBrokerRequestBatch newBatch 294
kafka/controller/PartitionStateMachine handleStateChanges 105
kafka/controller/KafkaController onNewPartitionCreation 499
kafka/controller/KafkaController onNewTopicCreation 485
kafka/controller/KafkaController$TopicChange process 1237
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply$mcV$sp 53
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply 53
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply 53
kafka/metrics/KafkaTimer time 32
kafka/controller/ControllerEventManager$ControllerEventThread doWork 64
kafka/utils/ShutdownableThread run 70

Topic creation follows the pattern of taking an event from a queue and then processing it.

The queue is kafka.controller.ControllerEventManager.queue.

Events are enqueued as follows; at bottom this is still listening for child changes on a ZooKeeper path:

CLASS_NAME METHOD_NAME LINE_NUM
kafka/controller/ControllerEventManager put 44
kafka/controller/TopicChangeListener handleChildChange 1712
org/I0Itec/zkclient/ZkClient$10 run 848
org/I0Itec/zkclient/ZkEventThread run 85
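The queue-and-single-thread pattern above can be sketched as follows. This is a minimal sketch with hypothetical names, not Kafka's actual classes: ZooKeeper listener threads enqueue events, and one event thread takes and processes them in FIFO order, which is why every stack trace in this article bottoms out in the same controller-event-thread.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class EventManagerSketch {
    // analogous to ControllerEvent: each event knows how to process itself
    interface ControllerEvent { void process(); }

    static class EventManager {
        private final BlockingQueue<ControllerEvent> queue = new LinkedBlockingQueue<>();

        // called by listener threads (e.g. on a ZK child change)
        void put(ControllerEvent e) { queue.offer(e); }

        // body of the single event thread's loop: take one event, process it
        void doWork() throws InterruptedException {
            queue.take().process();
        }
    }

    public static void main(String[] args) throws Exception {
        EventManager mgr = new EventManager();
        StringBuilder log = new StringBuilder();
        mgr.put(() -> log.append("TopicChange;"));
        mgr.put(() -> log.append("BrokerChange;"));
        mgr.doWork();
        mgr.doWork();
        System.out.println(log); // prints TopicChange;BrokerChange;
    }
}
```

Because a single thread drains the queue, event processing needs no further locking: events are serialized by construction.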

The listener is registered as follows:

    // class KafkaController
    private def registerTopicChangeListener() = {
      zkUtils.subscribeChildChanges(BrokerTopicsPath, topicChangeListener)
    }

As an aside, there are six places that subscribe to ZooKeeper child-node changes:

  • DynamicConfigManager.startup
  • registerTopicChangeListener
  • registerIsrChangeNotificationListener
  • registerTopicDeletionListener
  • registerBrokerChangeListener
  • registerLogDirEventNotificationListener

Handling the topic-creation event:

    // ControllerChannelManager.scala class ControllerBrokerRequestBatch
    def sendRequestsToBrokers(controllerEpoch: Int) {
      // .......
      val updateMetadataRequest = {
        val liveBrokers = if (updateMetadataRequestVersion == 0) {
          // .......
        } else {
          controllerContext.liveOrShuttingDownBrokers.map { broker =>
            val endPoints = broker.endPoints.map { endPoint =>
              new UpdateMetadataRequest.EndPoint(endPoint.host, endPoint.port, endPoint.securityProtocol, endPoint.listenerName)
            }
            new UpdateMetadataRequest.Broker(broker.id, endPoints.asJava, broker.rack.orNull)
          }
        }
        new UpdateMetadataRequest.Builder(updateMetadataRequestVersion, controllerId, controllerEpoch, partitionStates.asJava,
          liveBrokers.asJava)
      }
      updateMetadataRequestBrokerSet.foreach { broker =>
        controller.sendRequest(broker, ApiKeys.UPDATE_METADATA, updateMetadataRequest, null)
      }
      // .......
    }

A closer look at the metadata update during topic creation

The controller builds a send-request item and puts it into a send queue, where it waits for the send thread to deliver it.

The code that builds and enqueues the request:

    // ControllerChannelManager
    def sendRequest(brokerId: Int, apiKey: ApiKeys, request: AbstractRequest.Builder[_ <: AbstractRequest],
                    callback: AbstractResponse => Unit = null) {
      brokerLock synchronized {
        val stateInfoOpt = brokerStateInfo.get(brokerId)
        stateInfoOpt match {
          case Some(stateInfo) =>
            stateInfo.messageQueue.put(QueueItem(apiKey, request, callback))
          case None =>
            warn("Not sending request %s to broker %d, since it is offline.".format(request, brokerId))
        }
      }
    }

Call stack:

CLASS_NAME METHOD_NAME LINE_NUM
kafka/controller/ControllerChannelManager sendRequest 81
kafka/controller/KafkaController sendRequest 662
kafka/controller/ControllerBrokerRequestBatch$$anonfun$sendRequestsToBrokers$2 apply 405
kafka/controller/ControllerBrokerRequestBatch$$anonfun$sendRequestsToBrokers$2 apply 405
scala/collection/mutable/HashMap$$anonfun$foreach$1 apply 130
scala/collection/mutable/HashMap$$anonfun$foreach$1 apply 130
scala/collection/mutable/HashTable$class foreachEntry 241
scala/collection/mutable/HashMap foreachEntry 40
scala/collection/mutable/HashMap foreach 130
kafka/controller/ControllerBrokerRequestBatch sendRequestsToBrokers 502
kafka/controller/PartitionStateMachine handleStateChanges 105
kafka/controller/KafkaController onNewPartitionCreation 499
kafka/controller/KafkaController onNewTopicCreation 485
kafka/controller/KafkaController$TopicChange process 1237
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply$mcV$sp 53
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply 53
kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 apply 53
kafka/metrics/KafkaTimer time 32
kafka/controller/ControllerEventManager$ControllerEventThread doWork 64
kafka/utils/ShutdownableThread run 70

The send thread sends the request:

Code:

    // ControllerChannelManager.scala class RequestSendThread
    override def doWork(): Unit = {
      def backoff(): Unit = CoreUtils.swallowTrace(Thread.sleep(100))
      val QueueItem(apiKey, requestBuilder, callback) = queue.take()
      //...
      while (isRunning.get() && !isSendSuccessful) {
        // if a broker goes down for a long time, then at some point the controller's zookeeper listener will trigger a
        // removeBroker which will invoke shutdown() on this thread. At that point, we will stop retrying.
        try {
          if (!brokerReady()) {
            isSendSuccessful = false
            backoff()
          }
          else {
            val clientRequest = networkClient.newClientRequest(brokerNode.idString, requestBuilder,
              time.milliseconds(), true)
            clientResponse = NetworkClientUtils.sendAndReceive(networkClient, clientRequest, time)
            isSendSuccessful = true
          }
        } catch {
          case e: Throwable => // if the send was not successful, reconnect to broker and resend the message
            warn(("Controller %d epoch %d fails to send request %s to broker %s. " +
              "Reconnecting to broker.").format(controllerId, controllerContext.epoch,
              requestBuilder.toString, brokerNode.toString), e)
            networkClient.close(brokerNode.idString)
            isSendSuccessful = false
            backoff()
        }
      }
      // ......
    }
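The loop above boils down to: try to send, and on failure back off and try again. A minimal sketch of that retry pattern follows, with a hypothetical `Sender` interface; unlike the real thread, which retries until it is shut down, this sketch caps the number of attempts for illustration.

```java
public class RetrySketch {
    // hypothetical stand-in for "send one request and report success"
    interface Sender { boolean trySend() throws Exception; }

    // keep retrying with a fixed backoff until success or the attempt cap
    static int sendWithRetry(Sender sender, int maxAttempts, long backoffMs)
            throws InterruptedException {
        int attempts = 0;
        boolean ok = false;
        while (!ok && attempts < maxAttempts) {
            attempts++;
            try {
                ok = sender.trySend();
            } catch (Exception e) {
                ok = false; // real code logs, closes the connection, then retries
            }
            if (!ok) Thread.sleep(backoffMs); // corresponds to backoff()
        }
        return attempts;
    }

    public static void main(String[] args) throws InterruptedException {
        int[] calls = {0};
        // simulated sender: fails twice, succeeds on the third attempt
        int attempts = sendWithRetry(() -> ++calls[0] >= 3, 10, 1);
        System.out.println(attempts); // prints 3
    }
}
```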

Request-handling thread (on the receiving broker)

CLASS_NAME METHOD_NAME LINE_NUM
kafka/server/MetadataCache kafka$server$MetadataCache$$addOrUpdatePartitionInfo 150
kafka/utils/CoreUtils$ inLock 219
kafka/utils/CoreUtils$ inWriteLock 225
kafka/server/MetadataCache updateCache 184
kafka/server/ReplicaManager maybeUpdateMetadataCache 988
kafka/server/KafkaApis handleUpdateMetadataRequest 212
kafka/server/KafkaApis handle 142
kafka/server/KafkaRequestHandler run 72

Thread: kafka-request-handler-5

The partitionMetadataLock read-write lock makes reads and writes of the cache thread-safe. The metadata itself was already constructed on the sending (controller) side; the handler here also takes care of updating the set of live brokers, among other things.
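The locking discipline can be sketched as follows (hypothetical names, not the real MetadataCache): read paths take the read lock so many handler threads can query concurrently, while the UPDATE_METADATA handler takes the write lock to install new state.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CacheSketch {
    static class SimpleMetadataCache {
        private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        private final Map<String, Integer> partitionLeaders = new HashMap<>();

        // write path: analogous to handling an UPDATE_METADATA request
        void update(String topicPartition, int leader) {
            lock.writeLock().lock();
            try { partitionLeaders.put(topicPartition, leader); }
            finally { lock.writeLock().unlock(); }
        }

        // read path: analogous to serving a metadata query
        Integer leaderFor(String topicPartition) {
            lock.readLock().lock();
            try { return partitionLeaders.get(topicPartition); }
            finally { lock.readLock().unlock(); }
        }
    }

    public static void main(String[] args) {
        SimpleMetadataCache cache = new SimpleMetadataCache();
        cache.update("my-topic-0", 1);
        System.out.println(cache.leaderFor("my-topic-0")); // prints 1
    }
}
```

A read-write lock fits here because metadata reads vastly outnumber writes: reads proceed in parallel and only the occasional update blocks them.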

Still to be covered: leader changes, ISR changes, and the like.
