Notes on using kafka-python (supplement)
- kafka-python sends heartbeats from a dedicated thread at a fixed interval (heartbeat_interval_ms, 3000 ms by default).
- member_id uniquely identifies one consumer client.
- In group mode, while a consumer stays connected, another consumer joining or leaving the same group triggers a group rebalance, but the member_id of the consumers that were already connected does not change, which keeps the existing consumers stable.
- A consumer that disconnects and reconnects is given a new member_id.
- During a rebalance the group is unavailable for consumption.
- Heartbeats serve two purposes: they keep the current session alive, and they let the client find out promptly when the broker has group rebalance information.
- Turning on all of Django's log output exposes the DEBUG-level interaction between kafka-python and the Kafka brokers, which helps in understanding the exchange; see the sketch after this list.
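A minimal sketch of that logging idea, assuming a broker at localhost:9092 and a topic named test (both placeholders, not from the original post); in a Django project the same effect can be had by giving the "kafka" logger DEBUG level in the LOGGING setting:

```python
import logging

from kafka import KafkaConsumer

# Route kafka-python's internal loggers (heartbeat thread, JoinGroup/SyncGroup,
# fetch requests) to stdout at DEBUG level so the broker interaction is visible.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)
logging.getLogger("kafka").setLevel(logging.DEBUG)

# Broker address, topic and group below are placeholders for illustration.
consumer = KafkaConsumer(
    "test",
    bootstrap_servers="localhost:9092",
    group_id="xx",
    heartbeat_interval_ms=3000,   # heartbeat thread interval (default)
    session_timeout_ms=10000,     # broker evicts the member after this long without a heartbeat
)

for message in consumer:
    print(message.topic, message.partition, message.offset, message.value)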
Open question:
Observation: machine A starts two threads (two consumers) that consume the two partitions of topic=test on the Kafka server; when machine B then starts one consumer, a rebalance is triggered and B takes over one of the partitions.
Suspected reason: partitions are reportedly spread across machines first; since machine A already holds one, the remaining partition is preferentially handed to machine B.
For example:
Topic test in group xx has two partitions. With only one consumer A, that consumer must consume both partitions. If a second consumer B joins, it takes over one of them. If B later leaves group xx, the remaining consumer A has to consume both partitions again; likewise, if A stops consuming, B has to handle both. Within a group, a partition is consumed by only one consumer at a time.
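To observe this rebalancing in action, a ConsumerRebalanceListener can be passed to subscribe(). The following is a sketch using the example names from above (topic test, group xx, broker at localhost:9092), not code from the original post:

```python
from kafka import KafkaConsumer, ConsumerRebalanceListener


class PrintingRebalanceListener(ConsumerRebalanceListener):
    """Print which partitions this member loses and gains on every rebalance."""

    def on_partitions_revoked(self, revoked):
        print("revoked:", sorted(tp.partition for tp in revoked))

    def on_partitions_assigned(self, assigned):
        print("assigned:", sorted(tp.partition for tp in assigned))


consumer = KafkaConsumer(bootstrap_servers="localhost:9092", group_id="xx")
consumer.subscribe(["test"], listener=PrintingRebalanceListener())

# Run this script once: the single member is assigned both partitions.
# Start a second copy with the same group_id: a rebalance fires and each member
# ends up with one partition; stop one copy and the survivor gets both back.
for message in consumer:
    print(message.partition, message.offset)
```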
Configuration reference (the KafkaConsumer docstring, with annotations):
"""Consume records from a Kafka cluster. The consumer will transparently handle the failure of servers in the Kafka
cluster, and adapt as topic-partitions are created or migrate between
brokers. It also interacts with the assigned kafka Group Coordinator node
to allow multiple consumers to load balance consumption of topics (requires
kafka >= 0.9.0.0). The consumer is not thread safe and should not be shared across threads. 不是线程安全的 Arguments:
*topics (str): optional list of topics to subscribe to. If not set,
call :meth:`~kafka.KafkaConsumer.subscribe` or
:meth:`~kafka.KafkaConsumer.assign` before consuming records. Keyword Arguments:
bootstrap_servers: 'host[:port]' string (or list of 'host[:port]'
strings) that the consumer should contact to bootstrap initial
cluster metadata. This does not have to be the full node list.
It just needs to have at least one broker that will respond to a
Metadata API Request. Default port is 9092. If no servers are
specified, will default to localhost:9092.
client_id (str): A name for this client. This string is passed in
each request to servers and can be used to identify specific
server-side log entries that correspond to this client. Also
submitted to GroupCoordinator for logging with respect to
consumer group administration. Default: 'kafka-python-{version}'
group_id (str or None): The name of the consumer group to join for dynamic
partition assignment (if enabled), and to use for fetching and
committing offsets. If None, auto-partition assignment (via
group coordinator) and offset commits are disabled.
Default: None (Note: this parameter is required when consuming via subscribe().)
key_deserializer (callable): Any callable that takes a
raw message key and returns a deserialized key.
value_deserializer (callable): Any callable that takes a
raw message value and returns a deserialized value.
fetch_min_bytes (int): Minimum amount of data the server should
return for a fetch request, otherwise wait up to
fetch_max_wait_ms for more data to accumulate. Default: 1.
fetch_max_wait_ms (int): The maximum amount of time in milliseconds
the server will block before answering the fetch request if
there isn't sufficient data to immediately satisfy the
requirement given by fetch_min_bytes. Default: 500. (Note: even if fetch_min_bytes is not yet satisfied, the broker returns as soon as fetch_max_wait_ms expires.)
fetch_max_bytes (int): The maximum amount of data the server should
return for a fetch request. This is not an absolute maximum, if the
first message in the first non-empty partition of the fetch is
larger than this value, the message will still be returned to
ensure that the consumer can make progress. NOTE: consumer performs
fetches to multiple brokers in parallel so memory usage will depend
on the number of brokers containing partitions for the topic.
Supported Kafka version >= 0.10.1.0. Default: 52428800 (50 MB).
max_partition_fetch_bytes (int): The maximum amount of data
per-partition the server will return. The maximum total memory
used for a request = #partitions * max_partition_fetch_bytes.
This size must be at least as large as the maximum message size
the server allows or else it is possible for the producer to
send messages larger than the consumer can fetch. If that
happens, the consumer can get stuck trying to fetch a large
message on a certain partition. Default: 1048576.
request_timeout_ms (int): Client request timeout in milliseconds.
Default: .
retry_backoff_ms (int): Milliseconds to backoff when retrying on
errors. Default: 100.
reconnect_backoff_ms (int): The amount of time in milliseconds to
wait before attempting to reconnect to a given host.
Default: 50. (Note: backoff works roughly like this: after the first failed connection attempt the client waits a while; if the next attempt also fails, the wait is lengthened before retrying, and so on.)
reconnect_backoff_max_ms (int): The maximum amount of time in
milliseconds to wait when reconnecting to a broker that has
repeatedly failed to connect. If provided, the backoff per host
will increase exponentially for each consecutive connection
failure, up to this maximum. To avoid connection storms, a
randomization factor of 0.2 will be applied to the backoff
resulting in a random range between 20% below and 20% above
the computed value. Default: 1000.
max_in_flight_requests_per_connection (int): Requests are pipelined
to kafka brokers up to this number of maximum requests per
broker connection. Default: 5.
auto_offset_reset (str): A policy for resetting offsets on
OffsetOutOfRange errors: 'earliest' will move to the oldest
available message, 'latest' will move to the most recent. Any
other value will raise the exception. Default: 'latest'.
enable_auto_commit (bool): If True, the consumer's offset will be
periodically committed in the background. Default: True.
auto_commit_interval_ms (int): Number of milliseconds between automatic
offset commits, if enable_auto_commit is True. Default: 5000.
default_offset_commit_callback (callable): Called as
callback(offsets, response) response will be either an Exception
or an OffsetCommitResponse struct. This callback can be used to
trigger custom actions when a commit request completes.
check_crcs (bool): Automatically check the CRC32 of the records
consumed. This ensures no on-the-wire or on-disk corruption to
the messages occurred. This check adds some overhead, so it may
be disabled in cases seeking extreme performance. Default: True
metadata_max_age_ms (int): The period of time in milliseconds after
which we force a refresh of metadata, even if we haven't seen any
partition leadership changes to proactively discover any new
brokers or partitions. Default: 300000
partition_assignment_strategy (list): List of objects to use to
distribute partition ownership amongst consumer instances when
group management is used.
Default: [RangePartitionAssignor, RoundRobinPartitionAssignor]
max_poll_records (int): The maximum number of records returned in a
single call to :meth:`~kafka.KafkaConsumer.poll`. Default: 500
max_poll_interval_ms (int): The maximum delay between invocations of
:meth:`~kafka.KafkaConsumer.poll` when using consumer group
management. This places an upper bound on the amount of time that
the consumer can be idle before fetching more records. If
:meth:`~kafka.KafkaConsumer.poll` is not called before expiration
of this timeout, then the consumer is considered failed and the
group will rebalance in order to reassign the partitions to another
member. Default: 300000 (Note: this is the maximum interval between two poll() calls; if it is exceeded, the consumer is considered failed, the group rebalances, and its partitions are assigned to other consumers.)
session_timeout_ms (int): The timeout used to detect failures when
using Kafka's group management facilities. The consumer sends
periodic heartbeats to indicate its liveness to the broker. If
no heartbeats are received by the broker before the expiration of
this session timeout, then the broker will remove this consumer
from the group and initiate a rebalance. Note that the value must
be in the allowable range as configured in the broker configuration
by group.min.session.timeout.ms and group.max.session.timeout.ms.
Default: 10000 (Note: if the broker receives no heartbeat from the client within this session timeout, it removes the consumer from the group and rebalances, assigning its partitions to other consumers.)
heartbeat_interval_ms (int): The expected time in milliseconds
between heartbeats to the consumer coordinator when using
Kafka's group management facilities. Heartbeats are used to ensure
that the consumer's session stays active and to facilitate
rebalancing when new consumers join or leave the group. The
value must be set lower than session_timeout_ms, but typically
should be set no higher than 1/3 of that value. It can be
adjusted even lower to control the expected time for normal
rebalances. Default: 3000 (Note: the heartbeat interval.)
receive_buffer_bytes (int): The size of the TCP receive buffer
(SO_RCVBUF) to use when reading data. Default: None (relies on
system defaults). The java client defaults to 32768.
send_buffer_bytes (int): The size of the TCP send buffer
(SO_SNDBUF) to use when sending data. Default: None (relies on
system defaults). The java client defaults to 131072.
socket_options (list): List of tuple-arguments to socket.setsockopt
to apply to broker connection sockets. Default:
[(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]
consumer_timeout_ms (int): number of milliseconds to block during
message iteration before raising StopIteration (i.e., ending the
iterator). Default block forever [float('inf')].
skip_double_compressed_messages (bool): A bug in KafkaProducer <= 1.2.
caused some messages to be corrupted via double-compression.
By default, the fetcher will return these messages as a compressed
blob of bytes with a single offset, i.e. how the message was
actually published to the cluster. If you prefer to have the
fetcher automatically detect corrupt messages and skip them,
set this option to True. Default: False.
security_protocol (str): Protocol used to communicate with brokers.
Valid values are: PLAINTEXT, SSL. Default: PLAINTEXT.
ssl_context (ssl.SSLContext): Pre-configured SSLContext for wrapping
socket connections. If provided, all other ssl_* configurations
will be ignored. Default: None.
ssl_check_hostname (bool): Flag to configure whether ssl handshake
should verify that the certificate matches the brokers hostname.
Default: True.
ssl_cafile (str): Optional filename of ca file to use in certificate
verification. Default: None.
ssl_certfile (str): Optional filename of file in pem format containing
the client certificate, as well as any ca certificates needed to
establish the certificate's authenticity. Default: None.
ssl_keyfile (str): Optional filename containing the client private key.
Default: None.
ssl_password (str): Optional password to be used when loading the
certificate chain. Default: None.
ssl_crlfile (str): Optional filename containing the CRL to check for
certificate expiration. By default, no CRL check is done. When
providing a file, only the leaf certificate will be checked against
this CRL. The CRL can only be checked with Python 3.4+ or 2.7.9+.
Default: None.
api_version (tuple): Specify which Kafka API version to use. If set to
None, the client will attempt to infer the broker version by probing
various APIs. Different versions enable different functionality. Examples:
(0, 9) enables full group coordination features with automatic
partition assignment and rebalancing,
(0, 8, 2) enables kafka-storage offset commits with manual
partition assignment only,
(0, 8, 1) enables zookeeper-storage offset commits with manual
partition assignment only,
(0, 8, 0) enables basic functionality but requires manual
partition assignment and offset management. Default: None
api_version_auto_timeout_ms (int): number of milliseconds to throw a
timeout exception from the constructor when checking the broker
api version. Only applies if api_version set to 'auto'
connections_max_idle_ms: Close idle connections after the number of
milliseconds specified by this config. The broker closes idle
connections after connections.max.idle.ms, so this avoids hitting
unexpected socket disconnected errors on the client.
Default: 540000 (Note: the idle time the server tolerates for a client; my guess is that if the client sends nothing at all (heartbeats etc.) for longer than this, the connection is dropped for good.)
metric_reporters (list): A list of classes to use as metrics reporters.
Implementing the AbstractMetricsReporter interface allows plugging
in classes that will be notified of new metric creation. Default: []
metrics_num_samples (int): The number of samples maintained to compute
metrics. Default: 2
metrics_sample_window_ms (int): The maximum age in milliseconds of
samples used to compute metrics. Default: 30000
selector (selectors.BaseSelector): Provide a specific selector
implementation to use for I/O multiplexing.
Default: selectors.DefaultSelector
exclude_internal_topics (bool): Whether records from internal topics
(such as offsets) should be exposed to the consumer. If set to True
the only way to receive records from an internal topic is
subscribing to it. Requires 0.10+ Default: True
sasl_mechanism (str): String picking sasl mechanism when security_protocol
is SASL_PLAINTEXT or SASL_SSL. Currently only PLAIN is supported.
Default: None
sasl_plain_username (str): Username for sasl PLAIN authentication.
Default: None
sasl_plain_password (str): Password for sasl PLAIN authentication.
Default: None
sasl_kerberos_service_name (str): Service name to include in GSSAPI
sasl mechanism handshake. Default: 'kafka'
sasl_kerberos_domain_name (str): kerberos domain name to use in GSSAPI
sasl mechanism handshake. Default: one of bootstrap servers Note:
Configuration parameters are described in more detail at
https://kafka.apache.org/documentation/#newconsumerconfigs
"""