1. Basic information


A Kafka application based on Spring is very easy to set up, provided a Kafka broker cluster is already available. I introduced and built a broker environment in an earlier post; see Kafka研究【一】:bring up环境.

Also, when building a Kafka application on the Spring framework, pay attention to version compatibility. The official requirements are:

Apache Kafka Clients 1.0.0
Spring Framework 5.0.x
Minimum Java version: 8

The example in this post is built on Spring Boot, so the versions do not strictly follow the requirements above, but overall they still satisfy the compatibility constraints.

2. Building a Kafka application on Spring Boot

2.1 Create a Maven project in IDEA

Configure pom.xml. The full pom.xml of the project is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.roomdis</groupId>
    <artifactId>kafka</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>kafka</name>
    <description>kafka project with Spring Boot</description>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.5.4.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-freemarker</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
            <exclusions>
                <exclusion>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-starter-logging</artifactId>
                </exclusion>
                <exclusion>
                    <artifactId>log4j-over-slf4j</artifactId>
                    <groupId>org.slf4j</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.google.code.gson/gson -->
        <dependency>
            <groupId>com.google.code.gson</groupId>
            <artifactId>gson</artifactId>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <!-- log4j dependency -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-log4j</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>

Next we build the actual message producer and consumer. The topic here is fixed and uses the default single partition; the focus is on how to build a Kafka application under the Spring framework, and dynamically creating topics will be covered in depth in the next post. This post walks through a basic send and receive flow: sending uses the asynchronous (async) mode, and the receiving side uses application-level control of consumption acknowledgment. Generally speaking, production-grade Kafka applications choose application-controlled acknowledgment logic to guarantee safe message handling, with neither message loss nor duplicate consumption.
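
The producer and consumer below reference a Config class that the post does not show. Here is a minimal sketch of it, with the topic name taken from the kefuLogger topic that appears throughout the logs:

public class Config {
    // Fixed topic; the broker side uses the default single partition.
    public static final String TOPIC = "kefuLogger";
}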

2.2 Project configuration

I use a YAML configuration file here, which is quite simple and actually clearer than the equivalent properties configuration (a properties version is sketched after the YAML). The configuration is as follows:

server:
  port: 8899
  contextPath: /kafka
spring:
  application:
    name: kafka
  kafka:
    bootstrapServers: 10.90.7.2:9092,10.90.2.101:9092,10.90.2.102:9092
    consumer:
      groupId: kefu-logger
      enable-auto-commit: false
      keyDeserializer: org.apache.kafka.common.serialization.StringDeserializer
      valueDeserializer: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      groupId: kefu-logger
      retries:
      buffer-memory:
      keySerializer: org.apache.kafka.common.serialization.StringSerializer
      valueSerializer: org.apache.kafka.common.serialization.StringSerializer
    listener:
      ack-mode: MANUAL_IMMEDIATE
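
For comparison with the properties style mentioned above, a sketch of the same settings in application.properties form (the empty retries/buffer-memory entries are simply omitted):

server.port=8899
server.contextPath=/kafka
spring.application.name=kafka
spring.kafka.bootstrap-servers=10.90.7.2:9092,10.90.2.101:9092,10.90.2.102:9092
spring.kafka.consumer.group-id=kefu-logger
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.listener.ack-mode=MANUAL_IMMEDIATE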

A few points deserve emphasis:

A. The application port is 8899 and the project's external context path is kafka, i.e. URLs start with /kafka.

B. Both producing and consuming use the String (de)serializers.

C. The consumer and producer both specify the group kefu-logger. Note that groupId is really a mechanism for scaling consumption: consumers with the same groupId load-balance the messages of the topic's partitions among themselves.

D. Very importantly, the listener's ackMode is set to MANUAL_IMMEDIATE, i.e. manual immediate acknowledgment. This requires enable-auto-commit to be false on the consumer, and the consuming logic must perform the corresponding acknowledge operation on each consumed message; otherwise, the next time the consumer starts it will consume the records at those offsets again, causing duplicate consumption. A Java-config equivalent of this listener setting is sketched below.
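
For reference, the same listener setting can also be made in Java config. This is a minimal sketch assuming the spring-kafka 1.x API (where the AckMode enum lives on AbstractMessageListenerContainer); it is not code from the original project:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;

@Configuration
@EnableKafka
public class KafkaListenerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // Commit the offset immediately when the listener calls acknowledge();
        // requires enable.auto.commit=false on the consumer.
        factory.getContainerProperties().setAckMode(
                AbstractMessageListenerContainer.AckMode.MANUAL_IMMEDIATE);
        return factory;
    }
}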

2.3 Message definition

The longer-term goal is centralized log handling, so the DTO is defined around log messages. It has the following fields:

public class LogMessage {
    /*
     * Service type, e.g. IMS, BI, etc.
     */
    private String serviceType;
    /*
     * Server address as IP:PORT, e.g. 10.130.207.221:8080
     */
    private String serverAddr;
    /*
     * Full class path of the code that produced the log
     */
    private String fullClassPath;
    /*
     * Time the message was produced
     */
    private String messageTime;
    /*
     * Message body. Important: this is a JSON string, so it can carry
     * payload formats from different services.
     */
    private String content;
    /*
     * Log level, mainly INFO, WARN, ERROR, DEBUG, etc.
     */
    private String level;

    public String getServiceType() {
        return serviceType;
    }

    public void setServiceType(String serviceType) {
        this.serviceType = serviceType;
    }

    public String getServerAddr() {
        return serverAddr;
    }

    public void setServerAddr(String serverAddr) {
        this.serverAddr = serverAddr;
    }

    public String getFullClassPath() {
        return fullClassPath;
    }

    public void setFullClassPath(String fullClassPath) {
        this.fullClassPath = fullClassPath;
    }

    public String getMessageTime() {
        return messageTime;
    }

    public void setMessageTime(String messageTime) {
        this.messageTime = messageTime;
    }

    public String getContent() {
        return content;
    }

    public void setContent(String content) {
        this.content = content;
    }

    public String getLevel() {
        return level;
    }

    public void setLevel(String level) {
        this.level = level;
    }
}

Of course, the setters, getters, toString and other boilerplate in this DTO could be generated with annotations; to keep the explanation explicit I am not using the Lombok annotations here, but a Lombok version is sketched below for reference.
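
For comparison, a minimal sketch of the Lombok variant (relying on the lombok dependency already declared in the pom):

import lombok.Data;

// @Data generates the getters, setters, equals/hashCode and toString
// that are written out by hand above.
@Data
public class LogMessage {
    private String serviceType;   // service type, e.g. IMS, BI
    private String serverAddr;    // IP:PORT of the producing server
    private String fullClassPath; // class that produced the log
    private String messageTime;   // time the message was produced
    private String content;       // payload, a JSON string
    private String level;         // INFO, WARN, ERROR, DEBUG, ...
}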

2.4 Message producer

The focus here is producing messages asynchronously: delivery to the broker happens in the background, which is very valuable for concurrency.

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import org.apache.log4j.Logger;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Service;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

@Service
public class MessageProducer {

    private Logger logger = Logger.getLogger(MessageProducer.class);

    @Autowired
    private KafkaTemplate kafkaTemplate;

    private Gson gson = new GsonBuilder().create();

    public void send(LogMessage logMessage) {
        String msg = gson.toJson(logMessage);
        // Send asynchronously; both success and failure are handled in
        // callbacks, which is very convenient.
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(Config.TOPIC, msg);
        future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onSuccess(SendResult<String, String> stringStringSendResult) {
                long offset = stringStringSendResult.getRecordMetadata().offset();
                String cont = stringStringSendResult.getProducerRecord().toString();
                logger.info("cont: " + cont + ", offset: " + offset);
            }

            @Override
            public void onFailure(Throwable throwable) {
                logger.error(throwable.getMessage());
            }
        });
    }
}
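
The post never shows the code that drives the test traffic. Judging from the logs, KafkaApplication sends one message every two seconds whose content is a UUID plus a timestamp. Here is a hypothetical sketch consistent with those logs; the @Scheduled approach and the field choices are assumptions, not the original source:

import java.util.Date;
import java.util.UUID;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;

@SpringBootApplication
@EnableScheduling
public class KafkaApplication {

    @Autowired
    private MessageProducer messageProducer;

    public static void main(String[] args) {
        SpringApplication.run(KafkaApplication.class, args);
    }

    // Emit one test message every 2 seconds, matching the cadence in the logs.
    // serviceType and level are left unset: Gson skips null fields, which is
    // why they do not appear in the logged JSON payloads.
    @Scheduled(fixedRate = 2000)
    public void produce() {
        String now = new Date().toString();
        LogMessage logMessage = new LogMessage();
        logMessage.setServerAddr("10.90.9.20:8899");
        logMessage.setFullClassPath(KafkaApplication.class.toString());
        logMessage.setMessageTime(now);
        logMessage.setContent(UUID.randomUUID().toString() + ":" + now);
        messageProducer.send(logMessage);
    }
}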

2.5 Message consumer

In the consumer logic below, the onMessage methods must take an Acknowledgment parameter; without it there is no way to perform the MANUAL, application-level consumption acknowledgment.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.log4j.Logger;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.messaging.handler.annotation.Payload;
import org.springframework.stereotype.Service;

@Service
public class MessageConsumer {

    private Logger logger = Logger.getLogger(MessageConsumer.class);

    // Plain-payload variant: only the message body is needed.
    @KafkaListener(topics = Config.TOPIC)
    public void onMessage(@Payload String msg, Acknowledgment ack) {
        logger.info(msg);
        ack.acknowledge();
    }

    // ConsumerRecord variant: also exposes offset and partition metadata.
    @KafkaListener(topics = Config.TOPIC)
    public void onMessage(ConsumerRecord<String, String> record, Acknowledgment ack) {
        logger.info(record);
        long offset = record.offset();
        long partition = record.partition();
        String content = record.value();
        logger.info("offset: " + offset + ", partition: " + partition + ", payload: " + content);
        // Manually confirm that the message has been consumed. This is the
        // key to flexible, reliable consumption acknowledgment.
        ack.acknowledge();
    }
}

3. Verifying the behavior

The goal is to compare running with ack.acknowledge() against running without it, to understand concretely how skipping the acknowledgment leads to duplicate consumption.

3.1 With acknowledge

After a restart the offset continues to increase from where the last run stopped, and the corresponding payloads are all different. I won't paste the log for this case.

3.2 Without acknowledge
For comparison, here is a log excerpt from just before the application was stopped:

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v1.5.4.RELEASE)

2018-08-01 19:45:06.181  INFO 14264 --- [           main] c.roomdis.micros.kafka.KafkaApplication  : Starting KafkaApplication with PID 14264 (D:\Knowledge\SOURCE\springboot-kafka\target\classes started by chengsh05 in D:\Knowledge\SOURCE\springboot-kafka)
2018-08-01 19:45:06.184  INFO 14264 --- [           main] c.roomdis.micros.kafka.KafkaApplication  : No active profile set, falling back to default profiles: default
2018-08-01 19:45:06.236  INFO 14264 --- [           main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@48fa0f47: startup date [Wed Aug 01 19:45:06 CST 2018]; root of context hierarchy
2018-08-01 19:45:07.655  INFO 14264 --- [           main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 8899 (http)
2018-08-01 19:45:07.786  INFO 14264 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/kafka]  : Initializing Spring embedded WebApplicationContext
... (servlet/filter mappings and FreeMarker startup trimmed) ...
2018-08-01 19:45:08.748  INFO 14264 --- [           main] o.a.k.clients.consumer.ConsumerConfig    : ConsumerConfig values:
	auto.offset.reset = latest
	bootstrap.servers = [10.90.7.2:9092, 10.90.2.101:9092, 10.90.2.102:9092]
	enable.auto.commit = false
	group.id = kefu-logger
	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
	security.protocol = PLAINTEXT
	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
	... (remaining defaults trimmed; a second identical ConsumerConfig dump for the second listener container is omitted) ...
2018-08-01 19:45:08.796  INFO 14264 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka version : 0.10.1.1
2018-08-01 19:45:08.797  INFO 14264 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId : f10ef2720b03b247
2018-08-01 19:45:08.841  INFO 14264 --- [           main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8899 (http)
2018-08-01 19:45:08.848  INFO 14264 --- [           main] c.roomdis.micros.kafka.KafkaApplication  : Started KafkaApplication in 3.079 seconds (JVM running for 3.484)
2018-08-01 19:45:08.859  INFO 14264 --- [           main] o.a.k.clients.producer.ProducerConfig    : ProducerConfig values:
	bootstrap.servers = [10.90.7.2:9092, 10.90.2.101:9092, 10.90.2.102:9092]
	compression.type = none
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	security.protocol = PLAINTEXT
	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
	... (remaining defaults trimmed; the second ProducerConfig dump is omitted) ...
2018-08-01 19:45:08.932  INFO 14264 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : Discovered coordinator 10.90.2.102:9092 (id: ... rack: null) for group kefu-logger.
2018-08-01 19:45:08.936  INFO 14264 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : Revoking previously assigned partitions [] for group kefu-logger
2018-08-01 19:45:08.936  INFO 14264 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : partitions revoked:[]
2018-08-01 19:45:08.936  INFO 14264 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : (Re-)joining group kefu-logger
2018-08-01 19:45:08.947  INFO 14264 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : Successfully joined group kefu-logger with generation ...
2018-08-01 19:45:08.948  INFO 14264 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : Setting newly assigned partitions [kefuLogger-0] for group kefu-logger
2018-08-01 19:45:08.958  INFO 14264 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned:[kefuLogger-0]
2018-08-01 19:45:09.085  INFO 14264 --- [ad | producer-1] c.r.m.kafka.producer.MessageProducer     : cont: ProducerRecord(topic=kefuLogger, partition=null, key=null, value={"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:08 CST 2018","content":"a89aca04-f6d0-4f70-93e9-fd0471165497:Wed Aug 01 19:45:08 CST 2018"}, timestamp=null), offset: 112
2018-08-01 19:45:09.092  INFO 14264 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer     : ConsumerRecord(topic = kefuLogger, partition = 0, offset = 112, CreateTime = ..., checksum = ..., serialized key size = -1, serialized value size = 221, key = null, value = {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:08 CST 2018","content":"a89aca04-f6d0-4f70-93e9-fd0471165497:Wed Aug 01 19:45:08 CST 2018"})
2018-08-01 19:45:09.093  INFO 14264 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer     : offset: 112, partition: 0, payload: {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:08 CST 2018","content":"a89aca04-f6d0-4f70-93e9-fd0471165497:Wed Aug 01 19:45:08 CST 2018"}
2018-08-01 19:45:11.080  INFO 14264 --- [ad | producer-1] c.r.m.kafka.producer.MessageProducer     : cont: ProducerRecord(topic=kefuLogger, partition=null, key=null, value={"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:11 CST 2018","content":"f907347d-6582-452e-8bcb-4b4f490e5675:Wed Aug 01 19:45:11 CST 2018"}, timestamp=null), offset: 113
2018-08-01 19:45:11.081  INFO 14264 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer     : ConsumerRecord(topic = kefuLogger, partition = 0, offset = 113, CreateTime = 1533123911078, checksum = 843723551, serialized key size = -1, serialized value size = 221, key = null, value = {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:11 CST 2018","content":"f907347d-6582-452e-8bcb-4b4f490e5675:Wed Aug 01 19:45:11 CST 2018"})
2018-08-01 19:45:11.081  INFO 14264 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer     : offset: 113, partition: 0, payload: {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:11 CST 2018","content":"f907347d-6582-452e-8bcb-4b4f490e5675:Wed Aug 01 19:45:11 CST 2018"}
2018-08-01 19:45:13.082  INFO 14264 --- [ad | producer-1] c.r.m.kafka.producer.MessageProducer     : cont: ProducerRecord(topic=kefuLogger, partition=null, key=null, value={"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:13 CST 2018","content":"9ce5ce56-b66e-4952-9053-26a32c2b16de:Wed Aug 01 19:45:13 CST 2018"}, timestamp=null), offset: 114
2018-08-01 19:45:13.083  INFO 14264 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer     : ConsumerRecord(topic = kefuLogger, partition = 0, offset = 114, CreateTime = 1533123913080, checksum = 2420940286, serialized key size = -1, serialized value size = 221, key = null, value = {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:13 CST 2018","content":"9ce5ce56-b66e-4952-9053-26a32c2b16de:Wed Aug 01 19:45:13 CST 2018"})
2018-08-01 19:45:13.083  INFO 14264 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer     : offset: 114, partition: 0, payload: {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:13 CST 2018","content":"9ce5ce56-b66e-4952-9053-26a32c2b16de:Wed Aug 01 19:45:13 CST 2018"}
2018-08-01 19:45:15.084  INFO 14264 --- [ad | producer-1] c.r.m.kafka.producer.MessageProducer     : cont: ProducerRecord(topic=kefuLogger, partition=null, key=null, value={"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:15 CST 2018","content":"ad188d68-c9d2-49ba-be2a-f33b90a45404:Wed Aug 01 19:45:15 CST 2018"}, timestamp=null), offset: 115
2018-08-01 19:45:15.084  INFO 14264 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer     : ConsumerRecord(topic = kefuLogger, partition = 0, offset = 115, CreateTime = 1533123915082, checksum = 2206983395, serialized key size = -1, serialized value size = 221, key = null, value = {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:15 CST 2018","content":"ad188d68-c9d2-49ba-be2a-f33b90a45404:Wed Aug 01 19:45:15 CST 2018"})
2018-08-01 19:45:15.084  INFO 14264 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer     : offset: 115, partition: 0, payload: {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:15 CST 2018","content":"ad188d68-c9d2-49ba-be2a-f33b90a45404:Wed Aug 01 19:45:15 CST 2018"}

Process finished with exit code ...

After stopping the application, the log of the next startup:

2018-08-01 19:45:57.562  INFO 8632 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned:[kefuLogger-0]
2018-08-01 19:45:57.580  INFO 8632 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer     : ConsumerRecord(topic = kefuLogger, partition = 0, offset = 112, CreateTime = ..., checksum = ..., serialized key size = -1, serialized value size = 221, key = null, value = {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:08 CST 2018","content":"a89aca04-f6d0-4f70-93e9-fd0471165497:Wed Aug 01 19:45:08 CST 2018"})
2018-08-01 19:45:57.580 INFO 8632 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer : offset: 112, partition: 0, payload: {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:08 CST 2018","content":"a89aca04-f6d0-4f70-93e9-fd0471165497:Wed Aug 01 19:45:08 CST 2018"}
2018-08-01 19:45:57.580 INFO 8632 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer : ConsumerRecord(topic = kefuLogger, partition = 0, offset = 113, CreateTime = 1533123911078, checksum = 843723551, serialized key size = -1, serialized value size = 221, key = null, value = {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:11 CST 2018","content":"f907347d-6582-452e-8bcb-4b4f490e5675:Wed Aug 01 19:45:11 CST 2018"})
2018-08-01 19:45:57.580 INFO 8632 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer : offset: 113, partition: 0, payload: {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:11 CST 2018","content":"f907347d-6582-452e-8bcb-4b4f490e5675:Wed Aug 01 19:45:11 CST 2018"}
2018-08-01 19:45:57.580 INFO 8632 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer : ConsumerRecord(topic = kefuLogger, partition = 0, offset = 114, CreateTime = 1533123913080, checksum = 2420940286, serialized key size = -1, serialized value size = 221, key = null, value = {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:13 CST 2018","content":"9ce5ce56-b66e-4952-9053-26a32c2b16de:Wed Aug 01 19:45:13 CST 2018"})
2018-08-01 19:45:57.580 INFO 8632 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer : offset: 114, partition: 0, payload: {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:13 CST 2018","content":"9ce5ce56-b66e-4952-9053-26a32c2b16de:Wed Aug 01 19:45:13 CST 2018"}
2018-08-01 19:45:57.580 INFO 8632 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer : ConsumerRecord(topic = kefuLogger, partition = 0, offset = 115, CreateTime = 1533123915082, checksum = 2206983395, serialized key size = -1, serialized value size = 221, key = null, value = {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:15 CST 2018","content":"ad188d68-c9d2-49ba-be2a-f33b90a45404:Wed Aug 01 19:45:15 CST 2018"})
2018-08-01 19:45:57.580 INFO 8632 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer : offset: 115, partition: 0, payload: {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:15 CST 2018","content":"ad188d68-c9d2-49ba-be2a-f33b90a45404:Wed Aug 01 19:45:15 CST 2018"}
2018-08-01 19:45:57.738  INFO 8632 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer     : ConsumerRecord(topic = kefuLogger, partition = 0, offset = 116, CreateTime = ..., checksum = ..., serialized key size = -1, serialized value size = 221, key = null, value = {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:57 CST 2018","content":"b22aa35d-2bff-4e9e-9832-56145415b075:Wed Aug 01 19:45:57 CST 2018"})
2018-08-01 19:45:57.738  INFO 8632 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer     : offset: 116, partition: 0, payload: {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:57 CST 2018","content":"b22aa35d-2bff-4e9e-9832-56145415b075:Wed Aug 01 19:45:57 CST 2018"}
2018-08-01 19:45:57.738  INFO 8632 --- [ad | producer-1] c.r.m.kafka.producer.MessageProducer     : cont: ProducerRecord(topic=kefuLogger, partition=null, key=null, value={"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:57 CST 2018","content":"b22aa35d-2bff-4e9e-9832-56145415b075:Wed Aug 01 19:45:57 CST 2018"}, timestamp=null), offset: 116
2018-08-01 19:45:59.735  INFO 8632 --- [ad | producer-1] c.r.m.kafka.producer.MessageProducer     : cont: ProducerRecord(topic=kefuLogger, partition=null, key=null, value={"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:59 CST 2018","content":"b9a9148b-d0d6-49c5-ac2a-8cfac03dad90:Wed Aug 01 19:45:59 CST 2018"}, timestamp=null), offset: 117
2018-08-01 19:45:59.736  INFO 8632 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer     : ConsumerRecord(topic = kefuLogger, partition = 0, offset = 117, CreateTime = ..., checksum = ..., serialized key size = -1, serialized value size = 221, key = null, value = {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:59 CST 2018","content":"b9a9148b-d0d6-49c5-ac2a-8cfac03dad90:Wed Aug 01 19:45:59 CST 2018"})
2018-08-01 19:45:59.736  INFO 8632 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer     : offset: 117, partition: 0, payload: {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:59 CST 2018","content":"b9a9148b-d0d6-49c5-ac2a-8cfac03dad90:Wed Aug 01 19:45:59 CST 2018"}
2018-08-01 19:46:01.736  INFO 8632 --- [ad | producer-1] c.r.m.kafka.producer.MessageProducer     : cont: ProducerRecord(topic=kefuLogger, partition=null, key=null, value={"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:46:01 CST 2018","content":"3cbd6443-9617-4ac3-8985-f0b494187f0a:Wed Aug 01 19:46:01 CST 2018"}, timestamp=null), offset: 118
2018-08-01 19:46:01.736  INFO 8632 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer     : ConsumerRecord(topic = kefuLogger, partition = 0, offset = 118, CreateTime = ..., checksum = ..., serialized key size = -1, serialized value size = 221, key = null, value = {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:46:01 CST 2018","content":"3cbd6443-9617-4ac3-8985-f0b494187f0a:Wed Aug 01 19:46:01 CST 2018"})
2018-08-01 19:46:01.736  INFO 8632 --- [ntainer#0-0-L-1] c.r.m.kafka.consumer.MessageConsumer     : offset: 118, partition: 0, payload: {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:46:01 CST 2018","content":"3cbd6443-9617-4ac3-8985-f0b494187f0a:Wed Aug 01 19:46:01 CST 2018"}

Process finished with exit code ...

The records above at offsets 112 through 115 had clearly already been consumed before the restart. In other words, when enable-auto-commit is false, the application itself must perform the acknowledge; otherwise duplicate consumption occurs.

4. Problems encountered

These problems came up while implementing application-level consumption acknowledgment. Note that enable-auto-commit defaults to true; to control acknowledgment at the application level, enable-auto-commit must be set to false and, at the same time, ack-mode must be set to MANUAL or MANUAL_IMMEDIATE. If the two are not configured together, the consumer side reports errors. For example, I initially only set enable-auto-commit to false and did not configure ack-mode, which produced the following error:

-- ::49.469  INFO --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId : f10ef2720b03b247
-- ::49.541  INFO --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : Discovered coordinator 10.90.2.102:9092 (id: ... rack: null) for group kefu-logger.
-- ::49.543  INFO --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : Revoking previously assigned partitions [] for group kefu-logger
-- ::49.544  INFO --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : partitions revoked:[]
-- ::49.544  INFO --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : (Re-)joining group kefu-logger
-- ::49.557  INFO --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator  : Successfully joined group kefu-logger with generation ...
-- ::49.558  INFO --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : Setting newly assigned partitions [kefuLogger-0] for group kefu-logger
-- ::49.566  INFO --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned:[kefuLogger-0]
-- ::49.587 ERROR --- [ntainer#0-0-L-1] o.s.kafka.listener.LoggingErrorHandler   : Error while processing: ConsumerRecord(topic = kefuLogger, partition = 0, offset = 112, CreateTime = ..., checksum = ..., serialized key size = -1, serialized value size = 221, key = null, value = {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:08 CST 2018","content":"a89aca04-f6d0-4f70-93e9-fd0471165497:Wed Aug 01 19:45:08 CST 2018"})

org.springframework.kafka.listener.ListenerExecutionFailedException: Listener method could not be invoked with the incoming message
Endpoint handler details:
Method [public void com.roomdis.micros.kafka.consumer.MessageConsumer.onMessage(org.apache.kafka.clients.consumer.ConsumerRecord<java.lang.String, java.lang.String>,org.springframework.kafka.support.Acknowledgment)]
Bean [com.roomdis.micros.kafka.consumer.MessageConsumer@27068a50]; nested exception is org.springframework.messaging.converter.MessageConversionException: Cannot handle message; nested exception is org.springframework.messaging.converter.MessageConversionException: Cannot convert from [java.lang.String] to [org.springframework.kafka.support.Acknowledgment] for GenericMessage [payload={"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:08 CST 2018","content":"a89aca04-f6d0-4f70-93e9-fd0471165497:Wed Aug 01 19:45:08 CST 2018"}, headers={kafka_offset=, kafka_receivedMessageKey=null, kafka_receivedPartitionId=, kafka_receivedTopic=kefuLogger}], failedMessage=GenericMessage [payload={"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:08 CST 2018","content":"a89aca04-f6d0-4f70-93e9-fd0471165497:Wed Aug 01 19:45:08 CST 2018"}, headers={kafka_offset=, kafka_receivedMessageKey=null, kafka_receivedPartitionId=, kafka_receivedTopic=kefuLogger}]
at org.springframework.kafka.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:) ~[spring-kafka-1.1..RELEASE.jar:na]
at org.springframework.kafka.listener.adapter.RecordMessagingMessageListenerAdapter.onMessage(RecordMessagingMessageListenerAdapter.java:) ~[spring-kafka-1.1..RELEASE.jar:na]
at org.springframework.kafka.listener.adapter.RecordMessagingMessageListenerAdapter.onMessage(RecordMessagingMessageListenerAdapter.java:) ~[spring-kafka-1.1..RELEASE.jar:na]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:) [spring-kafka-1.1..RELEASE.jar:na]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:) [spring-kafka-1.1..RELEASE.jar:na]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.access$(KafkaMessageListenerContainer.java:) [spring-kafka-1.1..RELEASE.jar:na]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer$ListenerInvoker.run(KafkaMessageListenerContainer.java:) [spring-kafka-1.1..RELEASE.jar:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:) [na:1.8.0_77]
at java.util.concurrent.FutureTask.run(FutureTask.java:) [na:1.8.0_77]
at java.lang.Thread.run(Thread.java:) [na:1.8.0_77]
Caused by: org.springframework.messaging.converter.MessageConversionException: Cannot handle message; nested exception is org.springframework.messaging.converter.MessageConversionException: Cannot convert from [java.lang.String] to [org.springframework.kafka.support.Acknowledgment] for GenericMessage [payload={"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:08 CST 2018","content":"a89aca04-f6d0-4f70-93e9-fd0471165497:Wed Aug 01 19:45:08 CST 2018"}, headers={kafka_offset=, kafka_receivedMessageKey=null, kafka_receivedPartitionId=, kafka_receivedTopic=kefuLogger}], failedMessage=GenericMessage [payload={"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:08 CST 2018","content":"a89aca04-f6d0-4f70-93e9-fd0471165497:Wed Aug 01 19:45:08 CST 2018"}, headers={kafka_offset=, kafka_receivedMessageKey=null, kafka_receivedPartitionId=, kafka_receivedTopic=kefuLogger}]
... common frames omitted
Caused by: org.springframework.messaging.converter.MessageConversionException: Cannot convert from [java.lang.String] to [org.springframework.kafka.support.Acknowledgment] for GenericMessage [payload={"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:08 CST 2018","content":"a89aca04-f6d0-4f70-93e9-fd0471165497:Wed Aug 01 19:45:08 CST 2018"}, headers={kafka_offset=, kafka_receivedMessageKey=null, kafka_receivedPartitionId=, kafka_receivedTopic=kefuLogger}]
at org.springframework.messaging.handler.annotation.support.PayloadArgumentResolver.resolveArgument(PayloadArgumentResolver.java:) ~[spring-messaging-4.3..RELEASE.jar:4.3..RELEASE]
at org.springframework.messaging.handler.invocation.HandlerMethodArgumentResolverComposite.resolveArgument(HandlerMethodArgumentResolverComposite.java:) ~[spring-messaging-4.3..RELEASE.jar:4.3..RELEASE]
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.getMethodArgumentValues(InvocableHandlerMethod.java:) ~[spring-messaging-4.3..RELEASE.jar:4.3..RELEASE]
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:) ~[spring-messaging-4.3..RELEASE.jar:4.3..RELEASE]
at org.springframework.kafka.listener.adapter.HandlerAdapter.invoke(HandlerAdapter.java:) ~[spring-kafka-1.1..RELEASE.jar:na]
at org.springframework.kafka.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:) ~[spring-kafka-1.1..RELEASE.jar:na]
	... common frames omitted

-- ::49.592 ERROR --- [ntainer#0-0-L-1] o.s.kafka.listener.LoggingErrorHandler   : Error while processing: ConsumerRecord(topic = kefuLogger, partition = 0, offset = 113, CreateTime = ..., checksum = ..., serialized key size = -1, serialized value size = 221, key = null, value = {"serverAddr":"10.90.9.20:8899","fullClassPath":"class com.roomdis.micros.kafka.KafkaApplication","messageTime":"Wed Aug 01 19:45:11 CST 2018","content":"f907347d-6582-452e-8bcb-4b4f490e5675:Wed Aug 01 19:45:11 CST 2018"})

A supplementary note on the ack-mode configuration.
Per the official documentation, when enable-auto-commit is false, the ackMode values are interpreted as follows:

RECORD - commit the offset when the listener returns after processing the record.
BATCH - commit the offset when all the records returned by the poll() have been processed.
TIME - commit the offset when all the records returned by the poll() have been processed as long as the ackTime since the last commit has been exceeded.
COUNT - commit the offset when all the records returned by the poll() have been processed as long as ackCount records have been received since the last commit.
COUNT_TIME - similar to TIME and COUNT but the commit is performed if either condition is true.
MANUAL - the message listener is responsible to acknowledge() the Acknowledgment; after which, the same semantics as BATCH are applied.
MANUAL_IMMEDIATE - commit the offset immediately when the Acknowledgment.acknowledge() method is called by the listener.

Below are the configuration properties that go with the ackMode values:

spring.kafka.listener.ack-count= # Number of records between offset commits when ackMode is "COUNT" or "COUNT_TIME".
spring.kafka.listener.ack-mode= # Listener AckMode. See the spring-kafka documentation.
spring.kafka.listener.ack-time= # Time between offset commits when ackMode is "TIME" or "COUNT_TIME".
spring.kafka.listener.concurrency= # Number of threads to run in the listener containers.
spring.kafka.listener.poll-timeout= # Timeout to use when polling the consumer.
spring.kafka.listener.type=single # Listener type.
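
As an illustration of how these knobs combine, here is a hedged example (the values are invented for illustration, not taken from this project) that batches commits with COUNT_TIME: commit after 100 records or after 5 seconds, whichever comes first. The consumer must still set enable-auto-commit to false.

spring:
  kafka:
    listener:
      ack-mode: COUNT_TIME   # commit on count OR time, whichever hits first
      ack-count: 100         # commit after 100 processed records
      ack-time: 5000         # ...or after 5000 ms since the last commit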

Finally, before applying any new technology in real production, you must understand every key link of it; otherwise it is only a matter of time before risk, or outright disaster, strikes.
