1. install kafkacat

Ubuntu

sudo apt-get install kafkacat

CentOS

install dependency

yum install librdkafka-devel

download the source from github
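
for example (assuming the edenhill/kafkacat repository on github):

git clone https://github.com/edenhill/kafkacat.git
cd kafkacat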

build the source on CentOS

./configure <usual-configure-options>
make
sudo make install
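
to verify the build, list the cluster metadata (assuming a broker on localhost:9092):

kafkacat -L -b localhost:9092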

2. watch the target topic's data

lenmom@M1701:~/workspace/software/confluent-community-5.1.-2.11$ bin/kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic connect-offsets --property print.key=true
["jdbc_source_inventory_customers",{"query":"query"}] {"incrementing":1005}

there is only one record in the kafka topic connect-offsets.
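
the name of this topic comes from the offset.storage.topic setting in the connect worker config, e.g. in connect-distributed.properties:

offset.storage.topic=connect-offsets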

3. dump the record from the topic

lenmom@M1701:~/workspace/software/confluent-community-5.1.-2.11$ kafkacat -b localhost:9092 -t connect-offsets -C -K# -o -1
...
["jdbc_source_inventory_customers",{"query":"query"}]#{"incrementing":1005}
...

kafkacat also prints a "% Reached end of topic connect-offsets [<partition>] at offset <offset>" line for each of the topic's partitions (25 by default for connect-offsets), which is the noise elided above.

the value:

["jdbc_source_inventory_customers",{"query":"query"}]#{"incrementing":1005}

is what we want!
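
the key is a json array of the connector name plus the source partition, and the value is the source offset map; the # in the middle is just the key/value separator we set with -K#:

["jdbc_source_inventory_customers",{"query":"query"}]   <- key: [connector name, source partition]
{"incrementing":1005}                                   <- value: the last incrementing id the connector has committed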

4. use the value obtained in step 3 as a template and send it to the topic again

lenmom@M1701:~/workspace/software/confluent-community-5.1.-2.11$ echo '["jdbc_source_inventory_customers",{"query":"query"}]#{"incrementing":1}' | \
> kafkacat -b localhost:9092 -t connect-offsets -P -Z -K#

here, we modify the incrementing value from 1005 to 1, so the jdbc source connector will re-read every row with an incrementing id greater than 1.
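
the connector only reads this offset when its task starts, and a running task may commit a newer offset right over ours, so pause or stop the connector before producing and then restart its task afterwards; a sketch using the connect REST API, assuming the worker listens on the default port 8083:

curl -X POST http://localhost:8083/connectors/jdbc_source_inventory_customers/tasks/0/restart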

for timestamp+incrementing mode

echo '["jdbc_source_inventory_orders",{"query":"query"}]#{"timestamp_nanos":0,"incrementing":0,"timestamp":0}' | \
kafkacat -b localhost:9092 -t connect-offsets -P -Z -K#
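
timestamp is in epoch milliseconds (timestamp_nanos carries the extra nanosecond component), so to resume from a point in time rather than from the epoch, put the corresponding epoch-millis value in; e.g. 1546300800000 for 2019-01-01 00:00:00 UTC (hypothetical value, adjust to your table):

echo '["jdbc_source_inventory_orders",{"query":"query"}]#{"timestamp_nanos":0,"incrementing":0,"timestamp":1546300800000}' | \
kafkacat -b localhost:9092 -t connect-offsets -P -Z -K#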

5. watch the topic again

lenmom@M1701:~/workspace/software/confluent-community-5.1.-2.11$ bin/kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic connect-offsets --property print.key=true
["jdbc_source_inventory_customers",{"query":"query"}] {"incrementing":1005}
["jdbc_source_inventory_customers",{"query":"query"}] {"incrementing":1}

we can see there are now two values with the same key in the topic; the connector will use the latest one, {"incrementing":1}.
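
this works because connect creates connect-offsets as a log-compacted topic, so the last value per key wins; you can confirm the policy with kafka-topics (assuming zookeeper on localhost:2181, which this kafka version still requires for --describe):

bin/kafka-topics --zookeeper localhost:2181 --describe --topic connect-offsets

the configs column should show cleanup.policy=compact.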

reference

https://docs.confluent.io/current/app-development/kafkacat-usage.html
