1. Install kafkacat

Ubuntu

apt-get install kafkacat

CentOS

Install the dependency:

yum install librdkafka-devel

Download the source from GitHub (https://github.com/edenhill/kafkacat).

Build and install on CentOS:

./configure <usual-configure-options>
make
sudo make install
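
To verify the installation, you can list the cluster metadata; a minimal check, assuming a broker running on localhost:9092:

kafkacat -L -b localhost:9092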

2. Watch the target topic data

lenmom@M1701:~/workspace/software/confluent-community-5.1.-2.11$ bin/kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic connect-offsets --property print.key=true
["jdbc_source_inventory_customers",{"query":"query"}] {"incrementing":1005}

There is only one record in the Kafka topic connect-offsets.
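
Each record in connect-offsets is keyed by the connector name plus a source-partition map, with the source offset as the value; schematically:

Key:   ["<connector-name>", {<source-partition>}]
Value: {<source-offset>}

For the record above, the source offset {"incrementing":1005} is the last incrementing id the connector has read.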

3. Dump the record from the topic

lenmom@M1701:~/workspace/software/confluent-community-5.1.-2.11$ kafkacat -b localhost:9092 -t connect-offsets -C -K# -o-
% Reached end of topic connect-offsets [...] at offset ...
% Reached end of topic connect-offsets [...] at offset ...
...
["jdbc_source_inventory_customers",{"query":"query"}]#{"incrementing":1005}
...
% Reached end of topic connect-offsets [...] at offset ...

kafkacat prints one "% Reached end" line per partition (connect-offsets has 25 partitions by default), with the single record appearing among them.

Here -C consumes the topic and -K# prints each record as key#value. The pair

["jdbc_source_inventory_customers",{"query":"query"}]#{"incrementing":1005}

is what we want.
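
If the topic holds offsets for many connectors, you can filter for a single one; a minimal sketch under the same assumptions (-e makes kafkacat exit once it reaches the end of the topic instead of waiting for new messages):

kafkacat -b localhost:9092 -t connect-offsets -C -K# -e | grep jdbc_source_inventory_customers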

4. Use the value from step 3 as a template and send it to the topic again

lenmom@M1701:~/workspace/software/confluent-community-5.1.-2.11$ echo '["jdbc_source_inventory_customers",{"query":"query"}]#{"incrementing":1}' | \
> kafkacat -b localhost:9092 -t connect-offsets -P -Z -K#

Here we changed the incrementing value from 1005 to 1: -P puts kafkacat in producer mode and -K# splits each input line on # into key and value. The connector will treat 1 as the last id it has already read, so it re-reads every row with a larger incrementing id.

For timestamp+incrementing mode, the offset carries both fields:

echo '["jdbc_source_inventory_orders",{"query":"query"}]#{"timestamp_nanos":0,"incrementing":0,"timestamp":0}' | \
kafkacat -b localhost:9092 -t connect-offsets -P -Z -K#
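
You can also clear a connector's offset entirely instead of rewriting it, by producing a tombstone (NULL value) for the key; with -Z, an empty value after the # delimiter is sent as NULL. A sketch under the same assumptions:

echo '["jdbc_source_inventory_customers",{"query":"query"}]#' | \
kafkacat -b localhost:9092 -t connect-offsets -P -Z -K#

Since connect-offsets is log-compacted, the tombstone removes the stored offset and the connector starts from scratch on its next run.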

5. Watch the topic again

lenmom@M1701:~/workspace/software/confluent-community-5.1.-2.11$ bin/kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic connect-offsets --property print.key=true
["jdbc_source_inventory_customers",{"query":"query"}] {"incrementing":1005}
["jdbc_source_inventory_customers",{"query":"query"}] {"incrementing":1}

We can see there are now two values with the same key in the topic. Because connect-offsets is a log-compacted topic, only the latest value per key matters, so the connector will pick up {"incrementing":1}.
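
Note that a running Connect worker keeps offsets cached in memory, so stop the connector before rewriting its offset, or restart it afterwards so it rereads the topic. A sketch using the Connect REST API, assuming the worker listens on the default port 8083 and the connector name matches the key above:

curl -X POST http://localhost:8083/connectors/jdbc_source_inventory_customers/restart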

Reference

https://docs.confluent.io/current/app-development/kafkacat-usage.html
