Kafka consumer configuration
The consumer-related configuration constants are defined in org.apache.kafka.clients.consumer.ConsumerConfig.
The main configuration parameters are the following.
1. GROUP_ID_CONFIG = "group.id"
The consumer group ID. Within a group, a message is consumed only once; consumers in different groups can each consume the same message independently. Put simply, a message is delivered to every subscribing group, the groups do not affect one another, but inside a group the message is handled by exactly one consumer. Kafka uses consumer groups to implement both unicast and multicast delivery. (A minimal consumer sketch follows at the end of this list.)
2. MAX_POLL_RECORDS_CONFIG = "max.poll.records"
The maximum number of records returned in a single call to poll().
3. MAX_POLL_INTERVAL_MS_CONFIG = "max.poll.interval.ms"
The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member.
4. SESSION_TIMEOUT_MS_CONFIG = "session.timeout.ms"
The timeout used to detect consumer failures when using Kafka's group management facility. The consumer sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove this consumer from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms.
5. HEARTBEAT_INTERVAL_MS_CONFIG = "heartbeat.interval.ms"
The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances. (A group-tuning sketch covering items 3-5 and 10 follows the list.)
6. BOOTSTRAP_SERVERS_CONFIG = "bootstrap.servers"
The broker address list, in the form "host1:port1,host2:port2", e.g. "localhost:9092,localhost:9093".
7. CLIENT_DNS_LOOKUP_CONFIG = "client.dns.lookup"
8. ENABLE_AUTO_COMMIT_CONFIG = "enable.auto.commit"
If true, the consumer's offsets are committed automatically in the background. (A manual-commit sketch follows the list.)
9. AUTO_COMMIT_INTERVAL_MS_CONFIG = "auto.commit.interval.ms"
The interval at which offsets are committed automatically when enable.auto.commit is true.
10. PARTITION_ASSIGNMENT_STRATEGY_CONFIG = "partition.assignment.strategy"
The class name of the partition assignment strategy that the client will use to distribute partition ownership amongst consumer instances when group management is used. (Set in the group-tuning sketch after this list.)
11. AUTO_OFFSET_RESET_CONFIG = "auto.offset.reset"
The offset reset policy: what to do when there is no initial committed offset, or when the current offset is no longer valid (for example, because the data has been deleted).
earliest: automatically reset the offset to the earliest offset
latest: automatically reset the offset to the latest offset
none: throw an exception to the consumer if no previous offset is found for the consumer's group
anything else: throw an exception to the consumer
12. FETCH_MIN_BYTES_CONFIG = "fetch.min.bytes"
The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. The default setting of 1 byte means that fetch requests are answered as soon as a single byte of data is available or the fetch request times out waiting for data to arrive. Setting this to something greater than 1 will cause the server to wait for larger amounts of data to accumulate, which can improve server throughput a bit at the cost of some additional latency. (A fetch-tuning sketch follows the list.)
13. FETCH_MAX_BYTES_CONFIG = "fetch.max.bytes"
The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel.
14. DEFAULT_FETCH_MAX_BYTES = 52428800
15. FETCH_MAX_WAIT_MS_CONFIG = "fetch.max.wait.ms"
The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes.
16. METADATA_MAX_AGE_CONFIG = "metadata.max.age.ms"
17. MAX_PARTITION_FETCH_BYTES_CONFIG = "max.partition.fetch.bytes"
The maximum amount of data per-partition the server will return. Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). See fetch.max.bytes for limiting the consumer request size.
18. DEFAULT_MAX_PARTITION_FETCH_BYTES = 1048576
19. SEND_BUFFER_CONFIG = "send.buffer.bytes"
20. RECEIVE_BUFFER_CONFIG = "receive.buffer.bytes"
21. CLIENT_ID_CONFIG = "client.id"
22. RECONNECT_BACKOFF_MS_CONFIG = "reconnect.backoff.ms"
23. RECONNECT_BACKOFF_MAX_MS_CONFIG = "reconnect.backoff.max.ms"
24. RETRY_BACKOFF_MS_CONFIG = "retry.backoff.ms"
25. METRICS_SAMPLE_WINDOW_MS_CONFIG = "metrics.sample.window.ms"
26. METRICS_NUM_SAMPLES_CONFIG = "metrics.num.samples"
27. METRICS_RECORDING_LEVEL_CONFIG = "metrics.recording.level"
28. METRIC_REPORTER_CLASSES_CONFIG = "metric.reporters"
29. CHECK_CRCS_CONFIG = "check.crcs"
Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance.
30. KEY_DESERIALIZER_CLASS_CONFIG = "key.deserializer"
The deserializer class for message keys.
31. VALUE_DESERIALIZER_CLASS_CONFIG = "value.deserializer"
The deserializer class for message values.
32. CONNECTIONS_MAX_IDLE_MS_CONFIG = "connections.max.idle.ms"
33. REQUEST_TIMEOUT_MS_CONFIG = "request.timeout.ms"
The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.
34. DEFAULT_API_TIMEOUT_MS_CONFIG = "default.api.timeout.ms"
Specifies the timeout (in milliseconds) for consumer APIs that could block. This configuration is used as the default timeout for all consumer operations that do not explicitly accept a timeout parameter.
35. INTERCEPTOR_CLASSES_CONFIG = "interceptor.classes"
A list of classes to use as interceptors. Implementing the org.apache.kafka.clients.consumer.ConsumerInterceptor interface allows you to intercept (and possibly mutate) records received by the consumer. By default, there are no interceptors. (An interceptor sketch follows the list.)
36. EXCLUDE_INTERNAL_TOPICS_CONFIG = "exclude.internal.topics"
Whether records from internal topics (such as offsets) should be exposed to the consumer. If set to true the only way to receive records from an internal topic is subscribing to it.
37. DEFAULT_EXCLUDE_INTERNAL_TOPICS = true
38. LEAVE_GROUP_ON_CLOSE_CONFIG = "internal.leave.group.on.close"
39. ISOLATION_LEVEL_CONFIG = "isolation.level"
Controls how to read messages written transactionally. If set to read_committed, consumer.poll() will only return transactional messages which have been committed. If set to read_uncommitted (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode. Messages will always be returned in offset order. Hence, in read_committed mode, consumer.poll() will only return messages up to the last stable offset (LSO), which is the one less than the offset of the first open transaction. In particular, any messages appearing after messages belonging to ongoing transactions will be withheld until the relevant transaction has been completed. As a result, read_committed consumers will not be able to read up to the high watermark when there are in-flight transactions. Further, when in read_committed mode the seekToEnd method will return the LSO. (A read_committed sketch follows the list.)
40. DEFAULT_ISOLATION_LEVEL
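The sketches below show how these ConsumerConfig constants are typically wired together. They are minimal, illustrative examples rather than code from this article: the broker address localhost:9092, the group demo-group, the topic demo-topic and the class names are placeholders. First, a basic consumer that sets the group (item 1), the broker list (item 6), the deserializers (items 30-31) and a poll() cap (item 2), then loops over poll():

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class BasicConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker list (item 6) and consumer group (item 1).
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        // Key and value deserializers (items 30 and 31).
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Cap the number of records returned by a single poll() (item 2).
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic")); // placeholder topic
            while (true) {
                // Each iteration must call poll() within max.poll.interval.ms (item 3),
                // otherwise the coordinator considers this member failed and rebalances.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d key=%s value=%s%n",
                            record.partition(), record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```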
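Next, the group-tuning sketch referenced at items 3-5 and 10. The concrete numbers only illustrate the relationships described above (heartbeat well below the session timeout, session timeout inside the broker's allowed range); they are assumptions, not recommendations from this article.

```java
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.RoundRobinAssignor;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GroupTuningProps {
    // Builds consumer properties focused on group-management behaviour
    // (items 3, 4, 5 and 10). All concrete values are illustrative only.
    public static Properties build() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // The processing loop must call poll() at least this often (item 3).
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300_000);
        // Must fall inside the broker's group.min/max.session.timeout.ms range (item 4).
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30_000);
        // Typically no higher than 1/3 of session.timeout.ms (item 5).
        props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 10_000);
        // Strategy used to distribute partitions during a rebalance (item 10).
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, RoundRobinAssignor.class.getName());
        return props;
    }
}
```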
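The manual-commit sketch referenced at item 8 (also covering items 9 and 11): auto-commit is switched off and offsets are committed only after a batch has been processed, so a crash causes re-delivery rather than data loss; with no previously committed offset the consumer starts from the earliest records. The process() helper is a stand-in for real business logic.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualCommitDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Disable the background commit (items 8 and 9); offsets are committed explicitly below.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        // With no committed offset for this group, start from the earliest offset (item 11).
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    process(record);
                }
                // Commit only after the whole batch has been processed.
                consumer.commitSync();
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.println(record.value()); // stand-in for business logic
    }
}
```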
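The fetch-tuning sketch referenced at item 12 (covering items 12-18). The 64 KB / 500 ms figures are assumed values that only illustrate the throughput-versus-latency trade-off described above; fetch.max.bytes and max.partition.fetch.bytes are shown at their defaults (items 14 and 18).

```java
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;

public class FetchTuningProps {
    // Builds consumer properties focused on fetch sizing (items 12-18).
    public static Properties build() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // Let the broker wait until at least 64 KB are available per fetch (item 12)...
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 64 * 1024);
        // ...but never block a fetch for longer than 500 ms (item 15).
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 500);
        // Overall and per-partition upper bounds (items 13 and 17),
        // shown here at their defaults of 50 MB and 1 MB (items 14 and 18).
        props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 52_428_800);
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 1_048_576);
        return props;
    }
}
```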
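The interceptor sketch referenced at item 35, assuming a hypothetical CountingInterceptor class that merely logs how many records each poll() returned. It would be registered with props.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, CountingInterceptor.class.getName()).

```java
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerInterceptor;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// A do-almost-nothing interceptor (item 35): it only counts the records
// handed to the application on each poll().
public class CountingInterceptor implements ConsumerInterceptor<String, String> {

    @Override
    public ConsumerRecords<String, String> onConsume(ConsumerRecords<String, String> records) {
        System.out.println("poll() returned " + records.count() + " records");
        return records; // a real interceptor may return a filtered or modified copy
    }

    @Override
    public void onCommit(Map<TopicPartition, OffsetAndMetadata> offsets) {
        // invoked after offsets have been committed
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // no configuration needed for this sketch
    }

    @Override
    public void close() {
        // nothing to release
    }
}
```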
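Finally, the read_committed sketch referenced at item 39: with this isolation level, poll() only returns transactional messages that have been committed, and reads stop at the last stable offset while transactions are still open. Broker, group and topic names remain placeholders.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ReadCommittedDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Only return committed transactional messages (item 39).
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic")); // placeholder topic
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            System.out.println("committed records received: " + records.count());
        }
    }
}
```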