[Kafka Source Code] Handling Requests
The entry point in KafkaServer is:
apis = new KafkaApis(socketServer.requestChannel, replicaManager, groupCoordinator,
  kafkaController, zkUtils, config.brokerId, config, metadataCache, metrics, authorizer)
requestHandlerPool = new KafkaRequestHandlerPool(config.brokerId, socketServer.requestChannel, apis, config.numIoThreads)
KafkaApis is instantiated first from the relevant parameters, and a KafkaRequestHandlerPool is then built on top of it. Below we look at KafkaRequestHandlerPool first.
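Before going into the two classes, it helps to keep the overall flow in mind: the SocketServer's network threads put decoded requests onto the shared requestChannel, and the handler pool's threads take them off and hand them to KafkaApis. Below is a minimal sketch of that hand-off, assuming nothing more than a blocking queue; the names are hypothetical and this is not Kafka's actual RequestChannel.

import java.util.concurrent.{ArrayBlockingQueue, TimeUnit}

// Hypothetical stand-ins for the real request type and the real RequestChannel.
final case class SimpleRequest(apiKey: Short, payload: Array[Byte])

class SimpleRequestChannel(capacity: Int) {
  private val queue = new ArrayBlockingQueue[SimpleRequest](capacity)

  // Network threads call this once a complete request has been read off a socket.
  def sendRequest(req: SimpleRequest): Unit = queue.put(req)

  // Handler threads call this; it blocks for up to timeoutMs and may return null,
  // which is why the real handler loops on receiveRequest(300) until it gets a request.
  def receiveRequest(timeoutMs: Long): SimpleRequest =
    queue.poll(timeoutMs, TimeUnit.MILLISECONDS)
}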
1. KafkaRequestHandlerPool
class KafkaRequestHandlerPool(val brokerId: Int,
                              val requestChannel: RequestChannel,
                              val apis: KafkaApis,
                              numThreads: Int) extends Logging with KafkaMetricsGroup {

  /* a meter to track the average free capacity of the request handlers */
  private val aggregateIdleMeter = newMeter("RequestHandlerAvgIdlePercent", "percent", TimeUnit.NANOSECONDS)

  this.logIdent = "[Kafka Request Handler on Broker " + brokerId + "], "
  val threads = new Array[Thread](numThreads)
  val runnables = new Array[KafkaRequestHandler](numThreads)
  for(i <- 0 until numThreads) {
    runnables(i) = new KafkaRequestHandler(i, brokerId, aggregateIdleMeter, numThreads, requestChannel, apis)
    threads(i) = Utils.daemonThread("kafka-request-handler-" + i, runnables(i))
    threads(i).start()
  }
  //...
}
All it does is start numThreads threads; what each thread runs is a KafkaRequestHandler.
/**
 * A thread that answers kafka requests.
 */
class KafkaRequestHandler(id: Int,
                          brokerId: Int,
                          val aggregateIdleMeter: Meter,
                          val totalHandlerThreads: Int,
                          val requestChannel: RequestChannel,
                          apis: KafkaApis) extends Runnable with Logging {
  this.logIdent = "[Kafka Request Handler " + id + " on Broker " + brokerId + "], "

  def run() {
    while(true) {
      try {
        var req : RequestChannel.Request = null
        while (req == null) {
          // We use a single meter for aggregate idle percentage for the thread pool.
          // Since meter is calculated as total_recorded_value / time_window and
          // time_window is independent of the number of threads, each recorded idle
          // time should be discounted by # threads.
          val startSelectTime = SystemTime.nanoseconds
          req = requestChannel.receiveRequest(300)
          val idleTime = SystemTime.nanoseconds - startSelectTime
          aggregateIdleMeter.mark(idleTime / totalHandlerThreads)
        }

        if(req eq RequestChannel.AllDone) {
          debug("Kafka request handler %d on broker %d received shut down command".format(
            id, brokerId))
          return
        }
        req.requestDequeueTimeMs = SystemTime.milliseconds
        trace("Kafka request handler %d on broker %d handling request %s".format(id, brokerId, req))
        apis.handle(req) // this is the key line: where the request actually gets handled
      } catch {
        case e: Throwable => error("Exception when handling request", e)
      }
    }
  }

  // shutdown ...
}
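The comment inside the polling loop deserves a closer look. The meter's rate is total recorded value divided by the time window, and the window does not grow with the number of threads, so each thread divides its own idle time by totalHandlerThreads before marking the shared meter; summed over the pool, the rate becomes the average idle fraction. A small worked example with assumed numbers (not taken from Kafka):

object IdleMeterExample extends App {
  // Assume 8 handler threads, each idle for 250 ms during a 1 second window (made-up numbers).
  val numThreads         = 8
  val windowNanos        = 1000000000L   // 1 second observation window
  val idlePerThreadNanos = 250000000L    // 0.25 s of idle time recorded by each thread

  // Each thread marks idleTime / totalHandlerThreads, and all threads feed the same meter...
  val totalRecorded = numThreads * (idlePerThreadNanos.toDouble / numThreads)

  // ...so the meter's rate (total recorded / window) is the pool-average idle fraction: 0.25, i.e. ~25% idle.
  val avgIdleFraction = totalRecorded / windowNanos
  println(f"average idle fraction: $avgIdleFraction%.2f")
}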
In the run method we can see that the actual request processing happens in apis.handle(req). Let's look at that next.
2. KafkaApis.handle
Let's go straight to the code:
/**
 * Top-level method that handles all requests and multiplexes to the right api
 */
def handle(request: RequestChannel.Request) {
  try {
    trace("Handling request:%s from connection %s;securityProtocol:%s,principal:%s".
      format(request.requestDesc(true), request.connectionId, request.securityProtocol, request.session.principal))
    ApiKeys.forId(request.requestId) match { // dispatch to a different handler method based on the requestId
      case ApiKeys.PRODUCE => handleProducerRequest(request)
      case ApiKeys.FETCH => handleFetchRequest(request)
      case ApiKeys.LIST_OFFSETS => handleOffsetRequest(request)
      case ApiKeys.METADATA => handleTopicMetadataRequest(request)
      case ApiKeys.LEADER_AND_ISR => handleLeaderAndIsrRequest(request)
      case ApiKeys.STOP_REPLICA => handleStopReplicaRequest(request)
      case ApiKeys.UPDATE_METADATA_KEY => handleUpdateMetadataRequest(request)
      case ApiKeys.CONTROLLED_SHUTDOWN_KEY => handleControlledShutdownRequest(request)
      case ApiKeys.OFFSET_COMMIT => handleOffsetCommitRequest(request)
      case ApiKeys.OFFSET_FETCH => handleOffsetFetchRequest(request)
      case ApiKeys.GROUP_COORDINATOR => handleGroupCoordinatorRequest(request)
      case ApiKeys.JOIN_GROUP => handleJoinGroupRequest(request)
      case ApiKeys.HEARTBEAT => handleHeartbeatRequest(request)
      case ApiKeys.LEAVE_GROUP => handleLeaveGroupRequest(request)
      case ApiKeys.SYNC_GROUP => handleSyncGroupRequest(request)
      case ApiKeys.DESCRIBE_GROUPS => handleDescribeGroupRequest(request)
      case ApiKeys.LIST_GROUPS => handleListGroupsRequest(request)
      case ApiKeys.SASL_HANDSHAKE => handleSaslHandshakeRequest(request)
      case ApiKeys.API_VERSIONS => handleApiVersionsRequest(request)
      case requestId => throw new KafkaException("Unknown api code " + requestId)
    }
  } catch {
    case e: Throwable =>
      if (request.requestObj != null) {
        request.requestObj.handleError(e, requestChannel, request)
        error("Error when handling request %s".format(request.requestObj), e)
      } else {
        val response = request.body.getErrorResponse(request.header.apiVersion, e)
        val respHeader = new ResponseHeader(request.header.correlationId)

        /* If request doesn't have a default error response, we just close the connection.
           For example, when produce request has acks set to 0 */
        if (response == null)
          requestChannel.closeConnection(request.processor, request)
        else
          requestChannel.sendResponse(new Response(request, new ResponseSend(request.connectionId, respHeader, response)))

        error("Error when handling request %s".format(request.body), e)
      }
  } finally
    request.apiLocalCompleteTimeMs = SystemTime.milliseconds
}
2.1 The ApiKeys enum
PRODUCE(0, "Produce"),                            // producer sends messages
FETCH(1, "Fetch"),                                // consumer fetches messages
LIST_OFFSETS(2, "Offsets"),                       // list offsets
METADATA(3, "Metadata"),                          // fetch topic metadata
LEADER_AND_ISR(4, "LeaderAndIsr"),
STOP_REPLICA(5, "StopReplica"),                   // stop a replica
UPDATE_METADATA_KEY(6, "UpdateMetadata"),         // update metadata
CONTROLLED_SHUTDOWN_KEY(7, "ControlledShutdown"), // controlled shutdown
OFFSET_COMMIT(8, "OffsetCommit"),                 // commit offsets
OFFSET_FETCH(9, "OffsetFetch"),                   // fetch offsets
GROUP_COORDINATOR(10, "GroupCoordinator"),        // find the group coordinator
JOIN_GROUP(11, "JoinGroup"),                      // join a group
HEARTBEAT(12, "Heartbeat"),                       // heartbeat
LEAVE_GROUP(13, "LeaveGroup"),                    // leave a group
SYNC_GROUP(14, "SyncGroup"),                      // sync group state
DESCRIBE_GROUPS(15, "DescribeGroups"),            // describe groups
LIST_GROUPS(16, "ListGroups"),                    // list groups
SASL_HANDSHAKE(17, "SaslHandshake"),              // SASL handshake
API_VERSIONS(18, "ApiVersions");                  // supported API versions
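The numeric id on the left is what actually travels on the wire, and forId maps it back to the enum constant that handle matches on. A quick sketch, assuming the ApiKeys enum from org.apache.kafka.common.protocol as used in the code above:

import org.apache.kafka.common.protocol.ApiKeys

object ApiKeyLookupExample extends App {
  val requestId: Short = 1                      // e.g. the short read from the front of the request buffer
  val apiKey = ApiKeys.forId(requestId.toInt)   // ApiKeys.FETCH
  println(apiKey.name)                          // prints "Fetch"
}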
This part is straightforward; what matters is the structure of the Request and the handler methods that follow. Let's go through them step by step.
3. The Request data structure
Every request ultimately becomes a RequestChannel.Request, so let's look at this Request class first.
case class Request(processor: Int, connectionId: String, session: Session, private var buffer: ByteBuffer, startTimeMs: Long, securityProtocol: SecurityProtocol) {
  //...
  val requestId = buffer.getShort()

  private val keyToNameAndDeserializerMap: Map[Short, (ByteBuffer) => RequestOrResponse] =
    Map(ApiKeys.FETCH.id -> FetchRequest.readFrom,
        ApiKeys.CONTROLLED_SHUTDOWN_KEY.id -> ControlledShutdownRequest.readFrom
    )

  val requestObj =
    keyToNameAndDeserializerMap.get(requestId).map(readFrom => readFrom(buffer)).orNull

  val header: RequestHeader =
    if (requestObj == null) {
      buffer.rewind
      try RequestHeader.parse(buffer)
      catch {
        case ex: Throwable =>
          throw new InvalidRequestException(s"Error parsing request header. Our best guess of the apiKey is: $requestId", ex)
      }
    } else
      null

  val body: AbstractRequest =
    if (requestObj == null)
      try {
        // For unsupported version of ApiVersionsRequest, create a dummy request to enable an error response to be returned later
        if (header.apiKey == ApiKeys.API_VERSIONS.id && !Protocol.apiVersionSupported(header.apiKey, header.apiVersion))
          new ApiVersionsRequest
        else
          AbstractRequest.getRequest(header.apiKey, header.apiVersion, buffer)
      } catch {
        case ex: Throwable =>
          throw new InvalidRequestException(s"Error getting request for apiKey: ${header.apiKey} and apiVersion: ${header.apiVersion}", ex)
      }
    else
      null

  buffer = null
  private val requestLogger = Logger.getLogger("kafka.request.logger")

  def requestDesc(details: Boolean): String = {
    if (requestObj != null)
      requestObj.describe(details)
    else
      header.toString + " -- " + body.toString
  }
  //...
}
It has a few main parts:
- requestId, a short value.
- header, the request header, a RequestHeader.
- body, the request payload, an AbstractRequest.
3.1 requestId
The requestId identifies the API type; KafkaApis uses it to decide which handler method to invoke for the request.
3.2 header
Let's look at the structure of RequestHeader:
private final short apiKey;
private final short apiVersion;
private final String clientId;
private final int correlationId;
There are four fields: apiKey, apiVersion, clientId, and correlationId.
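On the wire these fields sit at the very front of every request, in the order apiKey, apiVersion, correlationId, clientId, with clientId encoded as a length-prefixed string. The following is only a rough, hand-rolled illustration of that layout, not Kafka's actual RequestHeader.parse:

import java.nio.ByteBuffer
import java.nio.charset.StandardCharsets

object HeaderLayoutSketch {
  // Rough sketch of the header layout: INT16 apiKey, INT16 apiVersion, INT32 correlationId,
  // then an INT16-length-prefixed clientId string (a negative length would mean a null clientId; ignored here).
  def parseHeaderSketch(buffer: ByteBuffer): (Short, Short, Int, String) = {
    val apiKey        = buffer.getShort()
    val apiVersion    = buffer.getShort()
    val correlationId = buffer.getInt()
    val clientIdLen   = buffer.getShort()
    val clientIdBytes = new Array[Byte](clientIdLen.toInt)
    buffer.get(clientIdBytes)
    (apiKey, apiVersion, correlationId, new String(clientIdBytes, StandardCharsets.UTF_8))
  }
}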
3.3 body
The body corresponds to AbstractRequest; its main job is to parse the concrete request out of the buffer based on the apiKey and the API version.
public static AbstractRequest getRequest(int requestId, int versionId, ByteBuffer buffer) {
    ApiKeys apiKey = ApiKeys.forId(requestId);
    switch (apiKey) {
        case PRODUCE:
            return ProduceRequest.parse(buffer, versionId);
        case FETCH:
            return FetchRequest.parse(buffer, versionId);
        case LIST_OFFSETS:
            return ListOffsetRequest.parse(buffer, versionId);
        case METADATA:
            return MetadataRequest.parse(buffer, versionId);
        case OFFSET_COMMIT:
            return OffsetCommitRequest.parse(buffer, versionId);
        case OFFSET_FETCH:
            return OffsetFetchRequest.parse(buffer, versionId);
        case GROUP_COORDINATOR:
            return GroupCoordinatorRequest.parse(buffer, versionId);
        case JOIN_GROUP:
            return JoinGroupRequest.parse(buffer, versionId);
        case HEARTBEAT:
            return HeartbeatRequest.parse(buffer, versionId);
        case LEAVE_GROUP:
            return LeaveGroupRequest.parse(buffer, versionId);
        case SYNC_GROUP:
            return SyncGroupRequest.parse(buffer, versionId);
        case STOP_REPLICA:
            return StopReplicaRequest.parse(buffer, versionId);
        case CONTROLLED_SHUTDOWN_KEY:
            return ControlledShutdownRequest.parse(buffer, versionId);
        case UPDATE_METADATA_KEY:
            return UpdateMetadataRequest.parse(buffer, versionId);
        case LEADER_AND_ISR:
            return LeaderAndIsrRequest.parse(buffer, versionId);
        case DESCRIBE_GROUPS:
            return DescribeGroupsRequest.parse(buffer, versionId);
        case LIST_GROUPS:
            return ListGroupsRequest.parse(buffer, versionId);
        case SASL_HANDSHAKE:
            return SaslHandshakeRequest.parse(buffer, versionId);
        case API_VERSIONS:
            return ApiVersionsRequest.parse(buffer, versionId);
        default:
            throw new AssertionError(String.format("ApiKey %s is not currently handled in `getRequest`, the " +
                    "code should be updated to do so.", apiKey));
    }
}
There are many request types here; to understand the exact structure of each one, look into the corresponding request class.
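Putting the pieces together, the non-legacy path in RequestChannel.Request boils down to: parse the header, then let the header's apiKey pick the request class and its apiVersion pick the schema. A condensed restatement using the same calls as the code above (the wrapper object and method name are only for illustration):

import java.nio.ByteBuffer
import org.apache.kafka.common.requests.{AbstractRequest, RequestHeader}

object ParseRequestSketch {
  // Condensed restatement of the parsing path shown in RequestChannel.Request above.
  def parseRequest(buffer: ByteBuffer): AbstractRequest = {
    val header = RequestHeader.parse(buffer)                              // apiKey, apiVersion, correlationId, clientId
    AbstractRequest.getRequest(header.apiKey, header.apiVersion, buffer)  // e.g. a FetchRequest when apiKey is 1
  }
}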