Kafka consumer configuration
The consumer configuration class is org.apache.kafka.clients.consumer.ConsumerConfig.
It defines the configuration parameters listed below.
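Before walking through the parameters one by one, here is a minimal sketch of how these ConsumerConfig constants are typically used to build a KafkaConsumer. This is an illustrative sketch, not code from the original post: the broker address, group id, and topic name are placeholders.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class BasicConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker list and group id -- replace with real values.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Cap the number of records returned by a single poll() (max.poll.records).
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500);
        // If the group has no committed offset yet, start from the beginning (auto.offset.reset).
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic")); // placeholder topic
            while (true) {
                // Each poll() also acts as the liveness signal bounded by max.poll.interval.ms.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d key=%s value=%s%n",
                            record.partition(), record.offset(), record.key(), record.value());
                }
            }
        }
    }
}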
1. GROUP_ID_CONFIG = "group.id";
The consumer group ID. Within a group, a message is consumed by exactly one consumer; consumers in different groups can each consume the same message independently. In other words, every subscribed group receives the message and groups do not affect one another, but inside a single group only one of its consumers will consume it. Kafka uses consumer groups to provide both unicast and broadcast semantics.
2. MAX_POLL_RECORDS_CONFIG = "max.poll.records";
The maximum number of records returned by a single call to poll().
3. MAX_POLL_INTERVAL_MS_CONFIG = "max.poll.interval.ms";
The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member.
4. SESSION_TIMEOUT_MS_CONFIG = "session.timeout.ms";
The timeout used to detect consumer failures when using Kafka's group management facility. The consumer sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, the broker removes this consumer from the group and initiates a rebalance. Note that the value must be in the allowable range configured on the broker by group.min.session.timeout.ms and group.max.session.timeout.ms.
5. HEARTBEAT_INTERVAL_MS_CONFIG = "heartbeat.interval.ms";
The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.
6. BOOTSTRAP_SERVERS_CONFIG = "bootstrap.servers";
The broker address list, in the form "host1:port1,host2:port2", e.g. "localhost:9092,localhost:9093".
7. CLIENT_DNS_LOOKUP_CONFIG = "client.dns.lookup";
8. ENABLE_AUTO_COMMIT_CONFIG = "enable.auto.commit";
If true, the consumer's offsets are committed automatically in the background (a manual-commit sketch follows the list).
9. AUTO_COMMIT_INTERVAL_MS_CONFIG = "auto.commit.interval.ms";
The interval at which offsets are auto-committed when enable.auto.commit is true.
10. PARTITION_ASSIGNMENT_STRATEGY_CONFIG = "partition.assignment.strategy";
The class name of the partition assignment strategy that the client will use to distribute partition ownership amongst consumer instances when group management is used.
11. AUTO_OFFSET_RESET_CONFIG = "auto.offset.reset";
The reset policy applied when there is no initial offset, or when the current offset is no longer valid (for example because the data has been deleted):
earliest: automatically reset the offset to the earliest offset
latest: automatically reset the offset to the latest offset
none: throw an exception to the consumer if no previous offset is found for the consumer's group
anything else: throw an exception to the consumer
12. FETCH_MIN_BYTES_CONFIG = "fetch.min.bytes";
The minimum amount of data the server should return for a fetch request. If insufficient data is available, the request waits for that much data to accumulate before answering. The default setting of 1 byte means that fetch requests are answered as soon as a single byte of data is available, or the fetch request times out waiting for data to arrive. Setting this to something greater than 1 causes the server to wait for larger amounts of data to accumulate, which can improve server throughput a bit at the cost of some additional latency (see the fetch-tuning sketch after the list).
13. FETCH_MAX_BYTES_CONFIG = "fetch.max.bytes";
The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel.
14. DEFAULT_FETCH_MAX_BYTES = 52428800;
15. FETCH_MAX_WAIT_MS_CONFIG = "fetch.max.wait.ms";
The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes.
16. METADATA_MAX_AGE_CONFIG = "metadata.max.age.ms";
17. MAX_PARTITION_FETCH_BYTES_CONFIG = "max.partition.fetch.bytes";
The maximum amount of data per partition the server will return. Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). See fetch.max.bytes for limiting the consumer request size.
18. DEFAULT_MAX_PARTITION_FETCH_BYTES = 1048576;
19. SEND_BUFFER_CONFIG = "send.buffer.bytes";
20. RECEIVE_BUFFER_CONFIG = "receive.buffer.bytes";
21. CLIENT_ID_CONFIG = "client.id";
22. RECONNECT_BACKOFF_MS_CONFIG = "reconnect.backoff.ms";
23. RECONNECT_BACKOFF_MAX_MS_CONFIG = "reconnect.backoff.max.ms";
24. RETRY_BACKOFF_MS_CONFIG = "retry.backoff.ms";
25. METRICS_SAMPLE_WINDOW_MS_CONFIG = "metrics.sample.window.ms";
26. METRICS_NUM_SAMPLES_CONFIG = "metrics.num.samples";
27. METRICS_RECORDING_LEVEL_CONFIG = "metrics.recording.level";
28. METRIC_REPORTER_CLASSES_CONFIG = "metric.reporters";
29. CHECK_CRCS_CONFIG = "check.crcs";
Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption of the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance.
30. KEY_DESERIALIZER_CLASS_CONFIG = "key.deserializer";
The deserializer class for record keys.
31. VALUE_DESERIALIZER_CLASS_CONFIG = "value.deserializer";
The deserializer class for record values.
32. CONNECTIONS_MAX_IDLE_MS_CONFIG = "connections.max.idle.ms";
33. REQUEST_TIMEOUT_MS_CONFIG = "request.timeout.ms";
Controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses, the client will resend the request if necessary, or fail the request if retries are exhausted.
34. DEFAULT_API_TIMEOUT_MS_CONFIG = "default.api.timeout.ms";
Specifies the timeout (in milliseconds) for consumer APIs that could block. This configuration is used as the default timeout for all consumer operations that do not explicitly accept a timeout parameter.
35. INTERCEPTOR_CLASSES_CONFIG = "interceptor.classes";
A list of classes to use as interceptors. Implementing the org.apache.kafka.clients.consumer.ConsumerInterceptor interface allows you to intercept (and possibly mutate) records received by the consumer. By default, there are no interceptors.
36. EXCLUDE_INTERNAL_TOPICS_CONFIG = "exclude.internal.topics";
Whether records from internal topics (such as offsets) should be exposed to the consumer. If set to true, the only way to receive records from an internal topic is subscribing to it.
37. DEFAULT_EXCLUDE_INTERNAL_TOPICS = true;
38. LEAVE_GROUP_ON_CLOSE_CONFIG = "internal.leave.group.on.close";
39. ISOLATION_LEVEL_CONFIG = "isolation.level";
Controls how to read messages written transactionally. If set to read_committed, consumer.poll() will only return transactional messages which have been committed. If set to read_uncommitted (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode. Messages will always be returned in offset order. Hence, in read_committed mode, consumer.poll() will only return messages up to the last stable offset (LSO), which is the one less than the offset of the first open transaction. In particular, any messages appearing after messages belonging to ongoing transactions will be withheld until the relevant transaction has been completed. As a result, read_committed consumers will not be able to read up to the high watermark when there are in-flight transactions. Further, when in read_committed mode, the seekToEnd method will return the LSO (see the read_committed sketch after the list).
40. DEFAULT_ISOLATION_LEVEL;
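The enable.auto.commit / auto.commit.interval.ms pair (items 8 and 9) controls automatic offset commits. A common alternative is to disable auto-commit and commit explicitly only after records have been processed. A hedged sketch, reusing the same placeholder broker, group, and topic as the first example:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualCommitSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Turn off background commits; offsets are committed explicitly below.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic")); // placeholder
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                for (ConsumerRecord<String, String> record : records) {
                    // Process before committing, so a crash causes redelivery
                    // (at-least-once) rather than data loss.
                    System.out.println(record.value());
                }
                if (!records.isEmpty()) {
                    consumer.commitSync(); // commit the offsets returned by the last poll()
                }
            }
        }
    }
}

commitSync() blocks until the broker acknowledges the commit; commitAsync() is the non-blocking variant.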
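Items 12 through 17 (fetch.min.bytes, fetch.max.bytes, fetch.max.wait.ms, max.partition.fetch.bytes) together trade latency for throughput. Below is a sketch of a throughput-leaning configuration; the specific numbers are illustrative assumptions, not recommendations:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

public class FetchTuningSketch {
    // Fetch-related settings biased toward throughput: the broker holds each
    // fetch until 64 KB are available or 500 ms have passed, whichever is first.
    static Properties throughputLeaningProps() {
        Properties props = new Properties();
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 64 * 1024);         // wait for at least 64 KB per fetch...
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 500);             // ...but never longer than 500 ms
        props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 52428800);          // overall fetch cap (the default, item 14)
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 1048576); // per-partition cap (the default, item 18)
        return props;
    }
}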
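Finally, for item 39: a consumer that should only see committed transactional messages sets isolation.level to read_committed. A minimal sketch of just the relevant setting:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

public class ReadCommittedSketch {
    static Properties readCommittedProps() {
        Properties props = new Properties();
        // read_uncommitted is the default; read_committed hides aborted
        // transactional records and reads only up to the last stable offset (LSO).
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        return props;
    }
}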