[Repost] Welcome to the di-kafkameter wiki!
https://github.com/rollno748/di-kafkameter/wiki#producer-elements
Introduction
DI-Kafkameter is a JMeter plugin that allows you to test and measure the performance of Apache Kafka.
Components
DI-Kafkameter comprises two components:
- Producer Component
  - Kafka Producer Config
  - Kafka Producer Sampler
- Consumer Component
  - Kafka Consumer Config
  - Kafka Consumer Sampler
Producer Component (Publish a Message to a Topic)
To publish (send) a message to a Kafka topic, you need to add the Producer components to the test plan.
- The Kafka Producer Config holds the connection information, including the security and other properties required to talk to the broker.
- The Kafka Producer Sampler sends messages to the topic over the connection established by the Config element.
Right click on Test Plan -> Add -> Config Element -> Kafka Producer Config
Provide a Variable name to export the connection object (which will be used in the Sampler element)
Provide the Kafka connection configs (a comma-separated list of brokers)
Provide a Client ID (make it unique, to identify where you are sending the message from)
Select the right security to connect to the brokers (this depends entirely on how security is configured on your Kafka cluster)
For JAAS security, you need to add the key and value below to the Additional Properties field:
Config key: sasl.jaas.config
Config value: org.apache.kafka.common.security.scram.ScramLoginModule required username="<USERNAME>" password="<PASSWORD>";
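For reference, the block below is a minimal Java sketch of the client configuration these settings amount to, assuming SCRAM-SHA-256 over SASL_PLAINTEXT; the broker list, client id, and credentials are placeholders, not values taken from the plugin.

```java
import java.util.Properties;

public class SaslConfigSketch {
    // Sketch of the connection properties the Producer Config element assembles.
    // Broker list, client id, protocol, mechanism, and credentials are assumptions.
    static Properties connection() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // comma-separated brokers
        props.put("client.id", "jmeter-producer-1");                 // unique client id
        props.put("security.protocol", "SASL_PLAINTEXT");            // match your cluster's listener
        props.put("sasl.mechanism", "SCRAM-SHA-256");                // pairs with ScramLoginModule
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.scram.ScramLoginModule required "
                + "username=\"<USERNAME>\" password=\"<PASSWORD>\";");
        return props;
    }
}
```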
Right click on Test Plan -> Add -> Sampler -> Kafka Producer Sampler
Use the same Variable name that was defined in the Config element
Define the topic name to which you want to send the message (case sensitive)
Kafka Message - the message that needs to be pushed to the topic
Partition String (optional) - posts messages to a particular partition by providing the partition number
Message Headers (optional) - adds headers to the messages being pushed (supports more than one header)
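Under the hood, the sampler hands these fields to the Kafka Java client. The sketch below shows roughly equivalent plain-client calls; the topic name, partition number, and header values are illustrative assumptions.

```java
import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Topic "test-topic", partition 0, no key, and the message body.
            ProducerRecord<String, String> record =
                new ProducerRecord<>("test-topic", 0, null, "hello kafka");
            // Optional headers, mirroring the Message Headers rows in the sampler.
            record.headers().add("trace-id", "abc-123".getBytes(StandardCharsets.UTF_8));
            producer.send(record); // asynchronous; close() flushes pending records
        }
    }
}
```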
Consumer Component (Read a Message from a Topic)
To consume (read) a message from a Kafka topic, you need to add the Consumer components to the test plan.
- The Kafka Consumer Config holds the connection information, including the security and other properties required to talk to the broker.
- The Kafka Consumer Sampler reads messages from the topic over the connection established by the Config element.
Right click on Test Plan -> Add -> Config Element -> Kafka Consumer Config
Provide a Variable name to export the connection object (which will be used in the Sampler element)
Provide the Kafka connection configs (a comma-separated list of brokers)
Provide a Group ID (make it unique, to define the group your consumer belongs to)
Define the topic name from which you want to read messages (case sensitive)
No Of Messages to Poll - defines the number of messages to read within a request (defaults to 1)
Auto Commit - sets the offset as read once the message is consumed
Select the right security to connect to the brokers (this depends entirely on how security is configured on your Kafka cluster)
For JAAS security, you need to add the same key and value as for the producer to the Additional Properties field:
Config key: sasl.jaas.config
Config value: org.apache.kafka.common.security.scram.ScramLoginModule required username="<USERNAME>" password="<PASSWORD>";
Right click on Test Plan -> Add -> Sampler -> Kafka Consumer Sampler
Use the same Variable name that was defined in the Config element
Poll timeout - sets the polling timeout for the consumer to read from the topic (defaults to 100 ms)
Commit Type - defines the commit type (Sync/Async)
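The sketch below approximates what the Consumer Sampler does with these settings, using the plain Kafka Java client; the broker, group id, and topic are placeholder assumptions, and commitSync() stands in for the Sync commit type (commitAsync() for Async).

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("group.id", "jmeter-consumer-group");   // the Group ID from the config
        props.put("enable.auto.commit", "false");         // commit manually, as with Commit Type
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic"));
            // 100 ms mirrors the sampler's default Poll timeout.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
            consumer.commitSync(); // Sync commit type; use commitAsync() for Async
        }
    }
}
```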
Producer Properties
Supported producer properties that can be added to the Additional Properties field.
Property | Available Options | Default |
---|---|---|
acks | [0, 1, -1] | 1 |
batch.size | positive integer | 16384 |
bootstrap.servers | comma-separated host:port pairs | localhost:9092 |
buffer.memory | positive long | 33554432 |
client.id | string | "" |
compression.type | [none, gzip, snappy, lz4, zstd] | none |
connections.max.idle.ms | positive long | 540000 |
delivery.timeout.ms | positive long | 120000 |
enable.idempotence | [true, false] | false |
interceptor.classes | fully-qualified class names | [] |
key.serializer | fully-qualified class name | org.apache.kafka.common.serialization.StringSerializer |
linger.ms | non-negative integer | 0 |
max.block.ms | non-negative long | 60000 |
max.in.flight.requests.per.connection | positive integer | 5 |
max.request.size | positive integer | 1048576 |
metadata.fetch.timeout.ms | positive long | 60000 |
metadata.max.age.ms | positive long | 300000 |
partitioner.class | fully-qualified class name | org.apache.kafka.clients.producer.internals.DefaultPartitioner |
receive.buffer.bytes | positive integer | 32768 |
reconnect.backoff.ms | non-negative long | 50 |
request.timeout.ms | positive integer | 30000 |
retries | non-negative integer | 0 |
sasl.jaas.config | string | null |
sasl.kerberos.kinit.cmd | string | /usr/bin/kinit |
sasl.kerberos.min.time.before.relogin | positive long | 60000 |
sasl.kerberos.service.name | string | null |
sasl.mechanism | [GSSAPI, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512] | GSSAPI |
security.protocol | [PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL] | PLAINTEXT |
sender.flush.timeout.ms | non-negative long | 0 |
send.buffer.bytes | positive integer | 131072 |
value.serializer | fully-qualified class name | org.apache.kafka.common.serialization.StringSerializer |
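To illustrate how the table entries are used, the sketch below overrides a handful of producer defaults, expressed as the equivalent Java client settings; the values are examples, not tuning recommendations.

```java
import java.util.Properties;

public class ProducerTuningSketch {
    // Example overrides for the Additional Properties field; values are illustrative.
    static Properties overrides() {
        Properties p = new Properties();
        p.put("acks", "-1");              // wait for all in-sync replicas
        p.put("compression.type", "lz4"); // compress record batches
        p.put("linger.ms", "5");          // allow up to 5 ms for batching
        p.put("retries", "3");            // retry transient send failures
        return p;
    }
}
```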
Consumer Properties
Supported consumer properties that can be added to the Additional Properties field.
Property | Available Options | Default |
---|---|---|
auto.commit.interval.ms | positive integer | 5000 |
auto.offset.reset | [earliest, latest, none] | latest |
bootstrap.servers | comma-separated host:port pairs | localhost:9092 |
check.crcs | [true, false] | true |
client.id | string | "" |
connections.max.idle.ms | positive long | 540000 |
enable.auto.commit | [true, false] | true |
exclude.internal.topics | [true, false] | true |
fetch.max.bytes | positive long | 52428800 |
fetch.max.wait.ms | non-negative integer | 500 |
fetch.min.bytes | non-negative integer | 1 |
group.id | string | "" |
heartbeat.interval.ms | positive integer | 3000 |
interceptor.classes | fully-qualified class names | [] |
isolation.level | [read_uncommitted, read_committed] | read_uncommitted |
key.deserializer | fully-qualified class name | org.apache.kafka.common.serialization.StringDeserializer |
max.partition.fetch.bytes | positive integer | 1048576 |
max.poll.interval.ms | positive long | 300000 |
max.poll.records | positive integer | 500 |
metadata.max.age.ms | positive long | 300000 |
metadata.fetch.timeout.ms | positive long | 60000 |
receive.buffer.bytes | positive integer | 32768 |
reconnect.backoff.ms | non-negative long | 50 |
request.timeout.ms | positive integer | 30000 |
retry.backoff.ms | non-negative long | 100 |
sasl.jaas.config | string | null |
sasl.kerberos.kinit.cmd | string | /usr/bin/kinit |
sasl.kerberos.min.time.before.relogin | positive long | 60000 |
sasl.kerberos.service.name | string | null |
sasl.mechanism | [GSSAPI, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512] | GSSAPI |
security.protocol | [PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL] | PLAINTEXT |
send.buffer.bytes | positive integer | 131072 |
session.timeout.ms | positive integer | 10000 |
value.deserializer | fully-qualified class name | org.apache.kafka.common.serialization.StringDeserializer |
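Likewise, a sketch of consumer overrides drawn from the table above; the values are illustrative only.

```java
import java.util.Properties;

public class ConsumerTuningSketch {
    // Example overrides for the Additional Properties field; values are illustrative.
    static Properties overrides() {
        Properties p = new Properties();
        p.put("auto.offset.reset", "earliest"); // start from the beginning when no offset exists
        p.put("max.poll.records", "100");       // cap records returned per poll
        p.put("fetch.min.bytes", "1024");       // wait for at least 1 KB per fetch
        p.put("session.timeout.ms", "15000");   // tolerate slower heartbeats
        return p;
    }
}
```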