1. Install kafkacat

Ubuntu

apt-get install kafkacat

CentOS

Install the dependency:

yum install librdkafka-devel

Download the source from GitHub:

git clone https://github.com/edenhill/kafkacat.git

Build the source on CentOS:

./configure <usual-configure-options>
make
sudo make install
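
Before moving on, it is worth confirming that kafkacat can actually reach the broker. A quick check (assuming a broker listening on localhost:9092, as in the steps below):

# list brokers, topics and partitions visible to this client
kafkacat -L -b localhost:9092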

2. Watch the target topic data

lenmom@M1701:~/workspace/software/confluent-community-5.1.0-2.11$ bin/kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic connect-offsets --property print.key=true
["jdbc_source_inventory_customers",{"query":"query"}] {"incrementing":1005}

There is only one record in the Kafka topic connect-offsets.
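
Each record's key identifies the connector and its source partition, and the value is the source offset map that the JDBC connector last committed. Annotated (the field names below are taken from the record above):

# key:   ["<connector-name>", <source-partition map>]  -- which connector, and which query/table
# value: <source-offset map>                           -- e.g. the last incrementing id that was read
["jdbc_source_inventory_customers",{"query":"query"}] {"incrementing":1005}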

3. Dump the record from the topic

lenmom@M1701:~/workspace/software/confluent-community-5.1.0-2.11$ kafkacat -b localhost:9092 -t connect-offsets -C -K# -o-1
% Reached end of topic connect-offsets [...] at offset ...
...
["jdbc_source_inventory_customers",{"query":"query"}]#{"incrementing":1005}
...
% Reached end of topic connect-offsets [...] at offset ...

kafkacat prints one "Reached end of topic" line per partition of connect-offsets.

The value:

["jdbc_source_inventory_customers",{"query":"query"}]#{"incrementing":1005}

is what we want.
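
On a cluster that runs many connectors, connect-offsets holds one record per connector/partition pair, so it helps to filter the dump down to the connector in question. A minimal sketch using the same flags as above plus -e (exit once the end of the topic is reached):

kafkacat -b localhost:9092 -t connect-offsets -C -K# -e | grep 'jdbc_source_inventory_customers'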

4. Use the value from step 3 as a template and send it to the topic again

lenmom@M1701:~/workspace/software/confluent-community-5.1.0-2.11$ echo '["jdbc_source_inventory_customers",{"query":"query"}]#{"incrementing":1}' | \
> kafkacat -b localhost:9092 -t connect-offsets -P -Z -K#

Here, we change the incrementing value from 1005 to 1.

For a timestamp+incrementing mode connector, the offset value carries the timestamp fields as well:

echo '["jdbc_source_inventory_orders",{"query":"query"}]#{"timestamp_nanos":0,"incrementing":0,"timestamp":0}' | \
kafkacat -b localhost:9092 -t connect-offsets -P -Z -K#
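
Note that a source task only looks up its offset from connect-offsets when it starts, so the rewritten value takes effect on the next restart of the connector. A sketch using the Connect REST API (assuming its default port 8083 and the connector name taken from the offset key):

# restart the connector so its task re-reads the offset and resumes from the new value
curl -X POST http://localhost:8083/connectors/jdbc_source_inventory_customers/restart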

5. Watch the topic again

lenmom@M1701:~/workspace/software/confluent-community-5.1.0-2.11$ bin/kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic connect-offsets --property print.key=true
["jdbc_source_inventory_customers",{"query":"query"}] {"incrementing":1005}
["jdbc_source_inventory_customers",{"query":"query"}] {"incrementing":1}

We can see there are now two records with the same key in the topic. connect-offsets is a log-compacted topic and Kafka Connect uses the latest record per key, so the connector will resume from incrementing=1.
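
If instead you want the connector to forget its offset entirely, producing a tombstone for the key should work: with -Z, kafkacat sends an empty value as NULL, which clears the stored offset for that key. A sketch:

# an empty value after the '#' key delimiter is produced as NULL (a tombstone)
echo '["jdbc_source_inventory_customers",{"query":"query"}]#' | \
kafkacat -b localhost:9092 -t connect-offsets -P -Z -K#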

Reference

https://docs.confluent.io/current/app-development/kafkacat-usage.html
