I. Kafka's local directory structure

[root@hadoop ~]# cd /tmp/kafka-logs1

[root@hadoop kafka-logs1]# find .
.
./.lock
./recovery-point-offset-checkpoint
./log-start-offset-checkpoint
./cleaner-offset-checkpoint
./replication-offset-checkpoint
./meta.properties
./mytest-1
./mytest-1/leader-epoch-checkpoint
./mytest-1/00000000000000000000.log
./mytest-1/00000000000000000000.index
./mytest-1/00000000000000000000.timeindex
./mytest-0
./mytest-0/leader-epoch-checkpoint
./mytest-0/00000000000000000000.log
./mytest-0/00000000000000000000.index
./mytest-0/00000000000000000000.timeindex
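
The numeric file names above are not arbitrary: each log segment is named after the offset of the first message it contains, zero-padded to 20 digits, so a fresh partition starts with 00000000000000000000.log plus matching .index and .timeindex files. A small sketch of the naming rule (the helper name is mine, not Kafka's):

```python
def segment_files(base_offset: int) -> list[str]:
    """Return the file names Kafka uses for the log segment whose first
    record has offset `base_offset`: the offset zero-padded to 20 digits,
    one file per extension."""
    name = f"{base_offset:020d}"
    return [name + ext for ext in (".log", ".index", ".timeindex")]

# A brand-new partition starts at offset 0:
print(segment_files(0))    # ['00000000000000000000.log', ...]
# If a segment rolled at offset 170, the next segment would be:
print(segment_files(170))  # ['00000000000000000170.log', ...]
```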

After setting up a single-node, multi-broker Kafka cluster, start ZooKeeper and then the Kafka brokers.

[root@hadoop ~]# cd /usr/local/kafka
[root@hadoop kafka]# zookeeper-server-start.sh config/zookeeper.properties
...
[root@hadoop kafka]# kafka-server-start.sh config/server0.properties &
...
[root@hadoop kafka]# kafka-server-start.sh config/server1.properties &
...
[root@hadoop kafka]# kafka-server-start.sh config/server2.properties &
...
[root@hadoop ~]# jps
QuorumPeerMain
Kafka
Kafka
Kafka
Jps

When building the Kafka cluster I already created a topic test02; now let's create another topic, mytest, with 2 partitions.

[root@hadoop ~]# cd /usr/local/kafka
[root@hadoop kafka]# kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 2 --topic mytest
Created topic "mytest".

Looking at the log directories, you will find that the three are nearly identical (and what is this __consumer_offsets-0 thing?).

Both test02 and mytest have a replication factor of 3, so all three brokers' log directories hold a copy;
test02 has a single partition while mytest has two, so there is one test02 directory and two mytest directories.
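
As for __consumer_offsets-0: __consumer_offsets is Kafka's internal topic for storing consumer offsets (50 partitions by default), and each consumer group is mapped to one of its partitions by hashing the group id with Java's String.hashCode. A sketch of that mapping, re-implementing the Java hash in Python; the group names are just examples:

```python
def java_string_hashcode(s: str) -> int:
    """Java's String.hashCode(): h = 31*h + c over the characters,
    wrapped to a signed 32-bit integer."""
    h = 0
    for c in s:
        h = (31 * h + ord(c)) & 0xFFFFFFFF
    return h - 0x100000000 if h >= 0x80000000 else h

def offsets_partition_for(group_id: str, num_partitions: int = 50) -> int:
    # Kafka: Utils.abs(groupId.hashCode) % offsets.topic.num.partitions
    return (java_string_hashcode(group_id) & 0x7FFFFFFF) % num_partitions

print(offsets_partition_for("test"))  # which __consumer_offsets-N holds this group
```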

[root@hadoop ~]# ll /tmp/kafka-logs0
-rw-r--r--  cleaner-offset-checkpoint
drwxr-xr-x  __consumer_offsets-0
-rw-r--r--  log-start-offset-checkpoint
-rw-r--r--  meta.properties
drwxr-xr-x  mytest-0          #naming convention: <topic>-<partition>
drwxr-xr-x  mytest-1
-rw-r--r--  recovery-point-offset-checkpoint
-rw-r--r--  replication-offset-checkpoint
drwxr-xr-x  test02-0

[root@hadoop ~]# ll /tmp/kafka-logs1
-rw-r--r--  cleaner-offset-checkpoint
-rw-r--r--  log-start-offset-checkpoint
-rw-r--r--  meta.properties
drwxr-xr-x  mytest-0
drwxr-xr-x  mytest-1
-rw-r--r--  recovery-point-offset-checkpoint
-rw-r--r--  replication-offset-checkpoint
drwxr-xr-x  test02-0

[root@hadoop ~]# ll /tmp/kafka-logs2
-rw-r--r--  cleaner-offset-checkpoint
-rw-r--r--  log-start-offset-checkpoint
-rw-r--r--  meta.properties
drwxr-xr-x  mytest-0
drwxr-xr-x  mytest-1
-rw-r--r--  recovery-point-offset-checkpoint
-rw-r--r--  replication-offset-checkpoint
drwxr-xr-x  test02-0
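
The various *-checkpoint files above are small text files that all share the same format: a version number on the first line, an entry count on the second, and then one "topic partition offset" entry per line. A minimal parser sketch; the sample content is illustrative, not copied from this cluster:

```python
def parse_checkpoint(text: str) -> dict[tuple[str, int], int]:
    """Parse a Kafka checkpoint file: line 1 = format version,
    line 2 = number of entries, then one "topic partition offset" per line."""
    lines = text.strip().splitlines()
    version = int(lines[0])          # currently 0 for these files
    count = int(lines[1])
    entries = {}
    for line in lines[2:2 + count]:
        topic, partition, offset = line.split()
        entries[(topic, int(partition))] = int(offset)
    return entries

sample = """0
2
mytest 0 1
mytest 1 1
"""
print(parse_checkpoint(sample))  # {('mytest', 0): 1, ('mytest', 1): 1}
```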

Inspect a topic directory

[root@hadoop ~]# ll /tmp/kafka-logs0/test02-0/
-rw-r--r--  00000000000000000000.index
-rw-r--r--  00000000000000000000.log
-rw-r--r--  00000000000000000000.timeindex
-rw-r--r--  00000000000000000000.snapshot
-rw-r--r--  leader-epoch-checkpoint

Inspect the broker metadata

[root@hadoop ~]# cat /tmp/kafka-logs0/meta.properties
version=0
broker.id=0
[root@hadoop ~]# cat /tmp/kafka-logs1/meta.properties
version=0
broker.id=1
[root@hadoop ~]# cat /tmp/kafka-logs2/meta.properties
version=0
broker.id=2
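
meta.properties pins a log directory to a broker id; if it disagrees with the broker.id in server.properties, the broker refuses to start. A quick sketch of reading it as a simple Java-style properties file (a deliberately minimal parser, not Kafka's own loader):

```python
def parse_properties(text: str) -> dict[str, str]:
    """Parse simple key=value lines, skipping blanks and '#' comments —
    enough for a file like Kafka's meta.properties."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

meta = parse_properties("version=0\nbroker.id=1\n")
print(meta)  # {'version': '0', 'broker.id': '1'}
```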

Produce and consume some messages on the mytest topic; you can see that the messages are stored in the logs of different partitions.

#produce messages
[root@hadoop kafka]# kafka-console-producer.sh --broker-list localhost:,localhost:,localhost: --topic mytest
>hello kafka
>hello world

#consume messages
[root@hadoop kafka]# kafka-console-consumer.sh --bootstrap-server localhost:,localhost:,localhost: --topic mytest --from-beginning
hello kafka
hello world

#you can see that the messages were saved in different partitions' logs
[root@hadoop ~]# cat /tmp/kafka-logs0/mytest-0/00000000000000000000.log
C}_Me Ye Yÿÿÿÿÿÿÿÿÿÿÿÿÿÿ"hello kafka[root@hadoop ~]#
[root@hadoop ~]# cat /tmp/kafka-logs0/mytest-1/00000000000000000000.log
e ꛁe ꞿÿÿÿÿÿÿÿÿÿÿÿÿÿ"hello world[root@hadoop ~]#
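
Why did the two messages land in different partitions? The console producer sends records with a null key, and keyless records are spread across partitions — older producers simply round-robin them — so consecutive messages tend to hit different partitions. A toy sketch of that round-robin assignment (the function name is mine, and this ignores the sticky-partitioner behavior of newer clients):

```python
from itertools import count

def make_round_robin_partitioner(num_partitions: int):
    """Return a partitioner that cycles through partitions for keyless
    messages, roughly how pre-2.4 Kafka producers handled a null key."""
    counter = count()
    return lambda: next(counter) % num_partitions

partition_for = make_round_robin_partitioner(2)
print(partition_for())  # 0  -> "hello kafka" ends up in mytest-0
print(partition_for())  # 1  -> "hello world" ends up in mytest-1
```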

II. Kafka's znodes in ZooKeeper

1./controller                //data = {"version":1,"brokerid":0,"timestamp":"1533396512695"}

2./controller_epoch          //data = 27. This is the controller epoch: it starts at 1 and is
                             //incremented every time a new controller is elected.

3./brokers/ids               //tracks the currently active brokers in real time
   /brokers/ids/0            //data = {"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://hadoop:9092"],
                             //        "jmx_port":-1,"host":"hadoop","timestamp":"1533399158574","port":9092,"version":4}
  /brokers/ids/1
  /brokers/ids/2

/brokers/topics
  /brokers/topics/mytest/partitions/0/state     //data = {"controller_epoch":28,"leader":0,"version":1,"leader_epoch":0,"isr":[0,2,1]}
  /brokers/topics/mytest/partitions/1/state     //data = {"controller_epoch":28,"leader":1,"version":1,"leader_epoch":0,"isr":[1,0,2]}

/brokers/seqid

4./admin/delete_topics

5./isr_change_notification

6./consumers

7./config
   /config/changes
  /config/clients
  /config/brokers
  /config/topics
  /config/users

Note: producers do not register themselves in ZooKeeper.
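
The partition-state znodes above are plain JSON, so leader/ISR information is easy to pull out programmatically. A sketch that extracts the leader and ISR from such a payload (the helper name is mine; the payload string is the mytest partition-0 state shown above):

```python
import json

def leader_and_isr(state_json: str) -> tuple[int, list[int]]:
    """Extract the current leader broker id and the in-sync replica set
    from a /brokers/topics/<topic>/partitions/<p>/state payload."""
    state = json.loads(state_json)
    return state["leader"], state["isr"]

payload = '{"controller_epoch":28,"leader":0,"version":1,"leader_epoch":0,"isr":[0,2,1]}'
print(leader_and_isr(payload))  # (0, [0, 2, 1])
```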

Start a ZooKeeper client

[root@hadoop ~]# cd /usr/local/kafka
[root@hadoop kafka]# zkCli.sh -server hadoop:2181   #start the zk CLI
...

List the root

[zk: hadoop:(CONNECTED) ] ls /   #list the znodes
[cluster, controller_epoch, controller, brokers, zookeeper, admin, isr_change_notification, consumers,
log_dir_event_notification, latest_producer_id_block, config]

Inspect /controller

[zk: hadoop:(CONNECTED) ] ls /controller
[]
[zk: hadoop:(CONNECTED) ] get /controller
#brokerid = 0 means broker 0 is currently the cluster's controller (its "leader")
#with multiple brokers, kill the controller and you will see this brokerid change
{"version":1,"brokerid":0,"timestamp":"1533396512695"}
cZxid = 0x513
mZxid = 0x513
pZxid = 0x513
ephemeralOwner = 0x10000711d710001
...

Inspect /controller_epoch

[zk: hadoop:(CONNECTED) ] ls /controller_epoch
[]
[zk: hadoop:(CONNECTED) ] get /controller_epoch
27
cZxid = 0x1c
mZxid = 0x514
pZxid = 0x1c
ephemeralOwner = 0x0
...

Inspect /brokers

[zk: hadoop:(CONNECTED) ] get /brokers
null
cZxid = 0x4
mZxid = 0x4
pZxid = 0xd
ephemeralOwner = 0x0
numChildren = 3
...
[zk: hadoop:(CONNECTED) ] ls /brokers   #the 3 children are ids, topics and seqid
[ids, topics, seqid]

Inspect /brokers/ids

[zk: hadoop:(CONNECTED) ] ls /brokers/ids
#shows all the active broker ids in the cluster
#if you kill broker 0, this becomes [1, 2]
[0, 1, 2]
[zk: hadoop:(CONNECTED) 8] get /brokers/ids/0
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://hadoop:9092"],
"jmx_port":-1,"host":"hadoop","timestamp":"1533399158574","port":9092,"version":4}
cZxid = 0x55f
mZxid = 0x55f
pZxid = 0x55f
ephemeralOwner = 0x10000711d710005
...

Inspect /brokers/topics

[zk: hadoop:(CONNECTED) 9] ls /brokers/topics
[mytest, test02, __consumer_offsets]
[zk: hadoop:(CONNECTED) ] ls /brokers/topics/mytest
[partitions]
[zk: hadoop:(CONNECTED) ] ls /brokers/topics/mytest/partitions   #mytest has 2 partitions, 0 and 1
[0, 1]
[zk: hadoop:(CONNECTED) 12] ls /brokers/topics/mytest/partitions/0
[state]
[zk: hadoop:(CONNECTED) 13] ls /brokers/topics/mytest/partitions/0/state
[]
#below you can see that each partition has a different leader
[zk: hadoop:(CONNECTED) 14] get /brokers/topics/mytest/partitions/0/state
{"controller_epoch":28,"leader":0,"version":1,"leader_epoch":0,"isr":[0,2,1]}
cZxid = 0x5a2
mZxid = 0x5a2
pZxid = 0x5a2
ephemeralOwner = 0x0
...
[zk: hadoop:(CONNECTED) ] get /brokers/topics/mytest/partitions/1/state
{"controller_epoch":28,"leader":1,"version":1,"leader_epoch":0,"isr":[1,0,2]}
cZxid = 0x5a1
mZxid = 0x5a1
pZxid = 0x5a1
ephemeralOwner = 0x0
...

Inspect /brokers/seqid

[zk: hadoop:(CONNECTED) 16] ls /brokers/seqid
[]
[zk: hadoop:(CONNECTED) ] get /brokers/seqid
null
cZxid = 0xd
mZxid = 0xd
pZxid = 0xd
ephemeralOwner = 0x0
...

Inspect /admin/delete_topics

[zk: hadoop:(CONNECTED) ] ls /admin
[delete_topics]
[zk: hadoop:(CONNECTED) ] ls /admin/delete_topics
[]

Inspect /isr_change_notification

[zk: hadoop:(CONNECTED) 20] ls /isr_change_notification
[]
[zk: hadoop:(CONNECTED) ] get /isr_change_notification
null
cZxid = 0xe
mZxid = 0xe
pZxid = 0x544
ephemeralOwner = 0x0
...

Inspect /consumers

#This stays empty for me. That is actually expected: consumers started with --bootstrap-server
#commit their offsets to the internal __consumer_offsets topic instead of ZooKeeper; only the
#old ZooKeeper-based consumers register under /consumers.
[zk: hadoop:(CONNECTED) ] ls /consumers
[]
[zk: hadoop:(CONNECTED) 23] get /consumers
null
cZxid = 0x2
mZxid = 0x2
pZxid = 0x2
ephemeralOwner = 0x0
...

Inspect /config

[zk: hadoop:(CONNECTED) 24] ls /config
[changes, clients, brokers, topics, users]
[zk: hadoop:(CONNECTED) 25] ls /config/changes
[]
[zk: hadoop:(CONNECTED) 26] ls /config/clients
[]
[zk: hadoop:(CONNECTED) ] ls /config/brokers
[]
[zk: hadoop:(CONNECTED) ] ls /config/topics   #same list as ls /brokers/topics
[mytest, test02, __consumer_offsets]
[zk: hadoop:(CONNECTED) ] ls /config/users
[]

Kill the cluster's controller and watch the corresponding znodes change

#1. kill the controller (broker 0)
[root@hadoop ~]# ps -ef|grep server0.properties   #server0's PID turns out to be 4791
root   4791 ... /server0.properties
root   ... grep --color=auto server0.properties
[root@hadoop ~]# kill -9 4791   #kill the process
[root@hadoop ~]# ps -ef|grep server0.properties   #check again
root   ... grep --color=auto server0.properties

#2. inspect /controller: brokerid changed from 0 to 1
[zk: hadoop:(CONNECTED) 30] get /controller
{"version":1,"brokerid":1,"timestamp":"..."}
cZxid = 0x54c
mZxid = 0x54c
pZxid = 0x54c
ephemeralOwner = 0x10000711d710002
...

#3. inspect /brokers/ids: only brokers 1 and 2 are still active
[zk: hadoop:(CONNECTED) ] ls /brokers/ids
[1, 2]

#4. restart the broker
[root@hadoop kafka]# kafka-server-start.sh config/server0.properties &

Delete the topic test02 and watch the corresponding znodes change

#1. delete the topic
[root@hadoop ~]# cd /usr/local/kafka
[root@hadoop kafka]# kafka-topics.sh --zookeeper localhost:2181 --delete --topic test02
Topic test02 is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.

#2. describe the topic: no output (shouldn't there be something?)
[root@hadoop kafka]# kafka-topics.sh --describe --zookeeper localhost:2181 --topic test02

#3. check whether test02-0 still exists under the log dir: it is gone
[root@hadoop kafka]# ll /tmp/kafka-logs0   #no test02 directory anymore
-rw-r--r--  cleaner-offset-checkpoint
drwxr-xr-x  __consumer_offsets-0
...
-rw-r--r--  log-start-offset-checkpoint
-rw-r--r--  meta.properties
drwxr-xr-x  mytest-0
drwxr-xr-x  mytest-1
-rw-r--r--  recovery-point-offset-checkpoint
-rw-r--r--  replication-offset-checkpoint

#4. inspect /admin/delete_topics: empty, although in theory the deleted topic test02 should appear here
[zk: hadoop:(CONNECTED) 32] ls /admin/delete_topics
[]

#5. inspect /brokers/topics: test02 really is gone
[zk: hadoop:(CONNECTED) 33] ls /brokers/topics
[mytest, __consumer_offsets]

#6. inspect /config/topics:
[zk: hadoop:(CONNECTED) 34] ls /config/topics
[mytest, __consumer_offsets]

#All of the above are my actual results. They differ somewhat from the instructor's demo, and for now I cannot explain why.
