I can't get kafka-clients to work against Kafka, and I can't figure out why. The relevant code and configuration are posted below; any pointers from someone familiar with Kafka would be appreciated, thanks!

Environment and dependencies

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.2.0</version>
</dependency>

The JDK version is 1.8, the Kafka version is 2.12-0.10.2.0, and the broker runs on a CentOS 7 server.

Test code

  • TestBase.java

public class TestBase {

    protected Logger log = LoggerFactory.getLogger(this.getClass());

    protected String kafka_server = "192.168.60.160:9092";

    protected String topic = "zlikun_topic";

}
  • ProducerTest.java

public class ProducerTest extends TestBase {

    protected Properties props = new Properties();

    @Before
    public void init() {
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka_server);
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, 0);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, MyPartitioner.class);
    }

    @Test
    public void test() throws InterruptedException {
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // send messages
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<String, String>(topic, Integer.toString(i), Integer.toString(i)), new Callback() {
                @Override
                public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                    if (e == null) {
                        System.out.printf("offset = %d ,partition = %d \n", recordMetadata.offset(), recordMetadata.partition());
                    } else {
                        log.error("send error !", e);
                    }
                }
            });
        }
        TimeUnit.SECONDS.sleep(3);
        producer.close();
    }
}
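
The custom MyPartitioner registered via PARTITIONER_CLASS_CONFIG above is not included in the post. For completeness, here is a minimal sketch of what such a class could look like against the 0.10.2 Partitioner interface; the key-hashing logic is an assumption for illustration, not the author's actual implementation.

import java.util.Map;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

public class MyPartitioner implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes, Object value,
                         byte[] valueBytes, Cluster cluster) {
        // Assumed logic: spread records across partitions by key hash.
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null) {
            return 0;
        }
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }

    @Override
    public void close() {
        // nothing to release
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // no custom configuration
    }
}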
  • ConsumerTest.java

public class ConsumerTest extends TestBase {

    private Properties props = new Properties();

    @Before
    public void init() {
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka_server);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "zlikun");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    }

    @Test
    public void test() {
        Consumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList(topic));
        // consumer.assign(Arrays.asList(new TopicPartition(topic, 1)));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
            }
        }
    }
}

Problem

# The test topic was created manually
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 4 --topic zlikun_topic
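
If the broker and ZooKeeper are healthy, describing the topic should report four partitions, each with a live leader (assuming the command is run on the broker host):

$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic zlikun_topic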

Console output

[kafka-producer-network-thread | producer-1] ERROR com.zlikun.mq.ProducerTest - send error !
org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for zlikun_topic-3: 30042 ms has passed since batch creation plus linger time
[kafka-producer-network-thread | producer-1] ERROR com.zlikun.mq.ProducerTest - send error !
org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for zlikun_topic-3: 30042 ms has passed since batch creation plus linger time
[kafka-producer-network-thread | producer-1] ERROR com.zlikun.mq.ProducerTest - send error !
org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for zlikun_topic-2: 30042 ms has passed since batch creation plus linger time
[kafka-producer-network-thread | producer-1] ERROR com.zlikun.mq.ProducerTest - send error !
org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for zlikun_topic-2: 30042 ms has passed since batch creation plus linger time
[kafka-producer-network-thread | producer-1] ERROR com.zlikun.mq.ProducerTest - send error !
org.apache.kafka.common.errors.TimeoutException: Expiring 3 record(s) for zlikun_topic-1: 30043 ms has passed since batch creation plus linger time
[kafka-producer-network-thread | producer-1] ERROR com.zlikun.mq.ProducerTest - send error !
org.apache.kafka.common.errors.TimeoutException: Expiring 3 record(s) for zlikun_topic-1: 30043 ms has passed since batch creation plus linger time
[kafka-producer-network-thread | producer-1] ERROR com.zlikun.mq.ProducerTest - send error !
org.apache.kafka.common.errors.TimeoutException: Expiring 3 record(s) for zlikun_topic-1: 30043 ms has passed since batch creation plus linger time
[kafka-producer-network-thread | producer-1] ERROR com.zlikun.mq.ProducerTest - send error !
org.apache.kafka.common.errors.TimeoutException: Expiring 3 record(s) for zlikun_topic-0: 30046 ms has passed since batch creation plus linger time
[kafka-producer-network-thread | producer-1] ERROR com.zlikun.mq.ProducerTest - send error !
org.apache.kafka.common.errors.TimeoutException: Expiring 3 record(s) for zlikun_topic-0: 30046 ms has passed since batch creation plus linger time
[kafka-producer-network-thread | producer-1] ERROR com.zlikun.mq.ProducerTest - send error !
org.apache.kafka.common.errors.TimeoutException: Expiring 3 record(s) for zlikun_topic-0: 30046 ms has passed since batch creation plus linger time

This answer explains it quite clearly:
http://www.goodpm.net/postreply/java/1010000008863969/Java操作Kafka执行不成功.html
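
The TimeoutException above typically means the producer reached the bootstrap server but its writes to the partition leaders never completed, which commonly happens when the broker advertises an address the client machine cannot resolve or reach. A follow-up write-up on this same problem points at the broker's advertised.listeners setting. Below is a minimal sketch of config/server.properties, assuming the broker should be reachable at 192.168.60.160 (the address used in TestBase); adjust it to the actual network setup:

# config/server.properties (sketch)
# Listen on all interfaces ...
listeners=PLAINTEXT://0.0.0.0:9092
# ... but advertise an address that clients can actually resolve and reach.
advertised.listeners=PLAINTEXT://192.168.60.160:9092

After changing this, restart the broker and re-run the producer test.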
