Kafka Consumer Configuration
The consumer configuration class is org.apache.kafka.clients.consumer.ConsumerConfig. It defines the following configuration parameters.
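Before walking through the individual parameters, here is a minimal sketch of where these constants are used. The broker address localhost:9092, the group id demo-group, and the topic demo-topic are all placeholders:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker list and consumer group id (placeholder values).
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));
            while (true) {
                // poll() must be called at least once per max.poll.interval.ms,
                // otherwise the group coordinator considers this consumer dead.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```

The snippets under the parameters below extend the props object and consumer from this sketch.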
1. GROUP_ID_CONFIG = "group.id";
The consumer group ID. Within a group, a message is consumed only once; consumers in different groups can each consume the same message. In other words, a message is delivered to every subscribing group, and whether one group consumes it has no effect on the others, but inside a group the message is consumed by exactly one consumer. Kafka uses consumer groups to implement both unicast and multicast delivery.
2. MAX_POLL_RECORDS_CONFIG = "max.poll.records";
The maximum number of records returned in a single call to poll().
3. MAX_POLL_INTERVAL_MS_CONFIG = "max.poll.interval.ms";
The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member.
4. SESSION_TIMEOUT_MS_CONFIG = "session.timeout.ms";
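These two settings interact: all records returned by one poll() must be processed within max.poll.interval.ms, or the group rebalances. A hedged tuning sketch, extending the props from the first example; the numbers are illustrative, not recommendations:

```java
// If each record can take up to 2 seconds to process, 100 records per
// poll needs at most 200 seconds, which stays inside the 300-second
// (5-minute) poll interval.
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);
props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300_000);
```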
The timeout used to detect consumer failures when using Kafka's group management facility. The consumer sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove this consumer from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms.
5. HEARTBEAT_INTERVAL_MS_CONFIG = "heartbeat.interval.ms";
The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.
6. BOOTSTRAP_SERVERS_CONFIG = "bootstrap.servers";
The broker address list, in the form "host1:port1,host2:port2", e.g. "localhost:9092,localhost:9093".
7. CLIENT_DNS_LOOKUP_CONFIG = "client.dns.lookup";
8. ENABLE_AUTO_COMMIT_CONFIG = "enable.auto.commit";
If true, the consumer's offsets are committed automatically in the background.
9. AUTO_COMMIT_INTERVAL_MS_CONFIG = "auto.commit.interval.ms";
The interval at which offsets are auto-committed when enable.auto.commit is true. To control exactly when offsets are committed, disable auto-commit and commit manually, as in the sketch below.
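A hedged sketch of manual offset management, reusing the consumer from the first example; process() stands in for hypothetical business logic:

```java
// Disable auto-commit so offsets advance only after processing succeeds,
// giving at-least-once delivery semantics.
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
    for (ConsumerRecord<String, String> record : records) {
        process(record); // hypothetical business logic
    }
    if (!records.isEmpty()) {
        consumer.commitSync(); // commit only after the batch is processed
    }
}
```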
10. PARTITION_ASSIGNMENT_STRATEGY_CONFIG = "partition.assignment.strategy";
The class name of the partition assignment strategy that the client will use to distribute partition ownership amongst consumer instances when group management is used.
11. AUTO_OFFSET_RESET_CONFIG = "auto.offset.reset";
The offset reset policy: what to do when there is no initial committed offset in Kafka, or the committed offset is invalid (e.g. because the data has been deleted):
earliest: automatically reset the offset to the earliest offset
latest: automatically reset the offset to the latest offset
none: throw an exception to the consumer if no previous offset is found for the consumer's group
anything else: throw an exception to the consumer
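For example, again extending the props from the first sketch, a consumer whose group has no committed offset can be made to start from the beginning of the log:

```java
// Without this, the default of "latest" would skip all existing
// records for a brand-new group id.
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
```

12. FETCH_MIN_BYTES_CONFIG = "fetch.min.bytes";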
The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. The default setting of 1 byte means that fetch requests are answered as soon as a single byte of data is available or the fetch request times out waiting for data to arrive. Setting this to something greater than 1 will cause the server to wait for larger amounts of data to accumulate, which can improve server throughput a bit at the cost of some additional latency.
13. FETCH_MAX_BYTES_CONFIG = "fetch.max.bytes";
The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel.
14. DEFAULT_FETCH_MAX_BYTES = 52428800; (50 MB)
15. FETCH_MAX_WAIT_MS_CONFIG = "fetch.max.wait.ms";
The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes.
16. METADATA_MAX_AGE_CONFIG = "metadata.max.age.ms";
17. MAX_PARTITION_FETCH_BYTES_CONFIG = "max.partition.fetch.bytes";
The maximum amount of data per-partition the server will return. Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). See fetch.max.bytes for limiting the consumer request size.
18. DEFAULT_MAX_PARTITION_FETCH_BYTES = 1048576; (1 MB)
19. SEND_BUFFER_CONFIG = "send.buffer.bytes";
20. RECEIVE_BUFFER_CONFIG = "receive.buffer.bytes";
21. CLIENT_ID_CONFIG = "client.id";
22. RECONNECT_BACKOFF_MS_CONFIG = "reconnect.backoff.ms";
23. RECONNECT_BACKOFF_MAX_MS_CONFIG = "reconnect.backoff.max.ms";
24. RETRY_BACKOFF_MS_CONFIG = "retry.backoff.ms";
25. METRICS_SAMPLE_WINDOW_MS_CONFIG = "metrics.sample.window.ms";
26. METRICS_NUM_SAMPLES_CONFIG = "metrics.num.samples";
27. METRICS_RECORDING_LEVEL_CONFIG = "metrics.recording.level";
28. METRIC_REPORTER_CLASSES_CONFIG = "metric.reporters";
29. CHECK_CRCS_CONFIG = "check.crcs";
Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance.
30. KEY_DESERIALIZER_CLASS_CONFIG = "key.deserializer";
Deserializer class for message keys.
31. VALUE_DESERIALIZER_CLASS_CONFIG = "value.deserializer";
Deserializer class for message values.
32. CONNECTIONS_MAX_IDLE_MS_CONFIG = "connections.max.idle.ms";
33. REQUEST_TIMEOUT_MS_CONFIG = "request.timeout.ms";
The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.
34. DEFAULT_API_TIMEOUT_MS_CONFIG = "default.api.timeout.ms";
Specifies the timeout (in milliseconds) for consumer APIs that could block. This configuration is used as the default timeout for all consumer operations that do not explicitly accept a timeout parameter.
35. INTERCEPTOR_CLASSES_CONFIG = "interceptor.classes";
A list of classes to use as interceptors. Implementing the org.apache.kafka.clients.consumer.ConsumerInterceptor interface allows you to intercept (and possibly mutate) records received by the consumer. By default, there are no interceptors. A sketch of a simple interceptor follows below.
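A minimal sketch of a logging interceptor; LoggingInterceptor is a made-up name, but ConsumerInterceptor and its callback methods are the real interface:

```java
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerInterceptor;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class LoggingInterceptor implements ConsumerInterceptor<String, String> {
    @Override
    public ConsumerRecords<String, String> onConsume(ConsumerRecords<String, String> records) {
        // Called just before records are returned from poll(); may also
        // return a mutated copy of the records.
        System.out.println("Fetched " + records.count() + " records");
        return records;
    }

    @Override
    public void onCommit(Map<TopicPartition, OffsetAndMetadata> offsets) {
        // Called when offsets are committed.
        System.out.println("Committed offsets: " + offsets);
    }

    @Override
    public void close() { }

    @Override
    public void configure(Map<String, ?> configs) { }
}
```

It would be registered via props.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, LoggingInterceptor.class.getName()).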
36. EXCLUDE_INTERNAL_TOPICS_CONFIG = "exclude.internal.topics";
Whether records from internal topics (such as offsets) should be exposed to the consumer. If set to true the only way to receive records from an internal topic is subscribing to it.
37. DEFAULT_EXCLUDE_INTERNAL_TOPICS = true;
38. LEAVE_GROUP_ON_CLOSE_CONFIG = "internal.leave.group.on.close";
39. ISOLATION_LEVEL_CONFIG = "isolation.level";
Controls how to read messages written transactionally. If set to read_committed, consumer.poll() will only return transactional messages which have been committed. If set to read_uncommitted (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode.
Messages will always be returned in offset order. Hence, in read_committed mode, consumer.poll() will only return messages up to the last stable offset (LSO), which is the one less than the offset of the first open transaction. In particular, any messages appearing after messages belonging to ongoing transactions will be withheld until the relevant transaction has been completed. As a result, read_committed consumers will not be able to read up to the high watermark when there are in-flight transactions.
Further, when in read_committed mode, the seekToEnd method will return the LSO.
40. DEFAULT_ISOLATION_LEVEL;
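For example, a consumer that should only see committed transactional writes, again extending the earlier props:

```java
// "read_uncommitted" is the default; "read_committed" withholds
// messages from open or aborted transactions.
props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
```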