Monitoring Data Sources

Start the broker, consumer, and producer JVMs with the following flags to enable JMX over RMI:

-ea -Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.port=9996

Then connect with a JMX service URL of the following form (the port must match the com.sun.management.jmxremote.port value configured at startup):

service:jmx:rmi:///jndi/rmi://127.0.0.1:9998/jmxrmi
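
As a minimal connectivity check, the sketch below opens such a connection from Scala and closes it again; the port is assumed to match the jmxremote.port flag shown above.

import javax.management.MBeanServerConnection
import javax.management.remote.{JMXConnectorFactory, JMXServiceURL}

object JmxConnect {
  def main(args: Array[String]) {
    // Port must match -Dcom.sun.management.jmxremote.port on the monitored JVM.
    val url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://127.0.0.1:9996/jmxrmi")
    val connector = JMXConnectorFactory.connect(url)
    try {
      val mbsc: MBeanServerConnection = connector.getMBeanServerConnection
      println("MBean count: " + mbsc.getMBeanCount)
    } finally {
      connector.close() // always release the RMI connection
    }
  }
}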

Monitored Metrics

Broker

bean name: kafka:type=kafka.SocketServerStats (these statistics are reset on every broker restart)

def getProduceRequestsPerSecond: Double
def getFetchRequestsPerSecond: Double
def getAvgProduceRequestMs: Double
def getMaxProduceRequestMs: Double
def getAvgFetchRequestMs: Double
def getMaxFetchRequestMs: Double
def getBytesReadPerSecond: Double
def getBytesWrittenPerSecond: Double
def getNumFetchRequests: Long
def getNumProduceRequests: Long
def getTotalBytesRead: Long
def getTotalBytesWritten: Long
def getTotalFetchRequestMs: Long
def getTotalProduceRequestMs: Long
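
Because these statistics are wiped on restart, and the *PerSecond gauges can read -0.0 right after startup (as in the sample output further down), it can be more robust to derive request rates yourself from the cumulative counters. A sketch, assuming the connection setup shown earlier; attribute names follow the getters above (getNumProduceRequests becomes the NumProduceRequests attribute):

import javax.management.{MBeanServerConnection, ObjectName}
import javax.management.remote.{JMXConnectorFactory, JMXServiceURL}

object SocketStatsPoller {
  def main(args: Array[String]) {
    val connector = JMXConnectorFactory.connect(
      new JMXServiceURL("service:jmx:rmi:///jndi/rmi://127.0.0.1:9996/jmxrmi"))
    val mbsc: MBeanServerConnection = connector.getMBeanServerConnection
    val stats = new ObjectName("kafka:type=kafka.SocketServerStats")

    // Read one cumulative counter attribute from the SocketServerStats bean.
    def counter(attr: String): Long = mbsc.getAttribute(stats, attr).asInstanceOf[Long]

    var lastProduce = counter("NumProduceRequests")
    var lastFetch = counter("NumFetchRequests")
    while (true) {
      Thread.sleep(10000) // 10 s sampling interval
      val produce = counter("NumProduceRequests")
      val fetch = counter("NumFetchRequests")
      println("produce req/s: " + (produce - lastProduce) / 10.0 +
        ", fetch req/s: " + (fetch - lastFetch) / 10.0)
      lastProduce = produce
      lastFetch = fetch
    }
  }
}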

bean name: kafka:type=kafka.BrokerAllTopicStat (reset on every restart)
bean name: kafka:type=kafka.BrokerTopicStat.topic (reset on every restart)

def getMessagesIn: Long  number of messages written
def getBytesIn: Long  number of bytes written
def getBytesOut: Long  number of bytes read out
def getFailedProduceRequest: Long  number of failed produce requests
def getFailedFetchRequest: Long  number of failed fetch requests
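
These per-topic counters can also be read generically with getAttribute, which keeps Kafka's MBean traits off the monitoring classpath. A sketch; the bean-name pattern follows the list above:

import javax.management.{MBeanServerConnection, ObjectName}

object BrokerTopicStatReader {
  // Reads the per-topic counters via getAttribute, so no Kafka jars are
  // needed by the monitoring client.
  def printStats(mbsc: MBeanServerConnection, topic: String) {
    val name = new ObjectName("kafka:type=kafka.BrokerTopicStat." + topic)
    for (attr <- Seq("MessagesIn", "BytesIn", "BytesOut",
                     "FailedProduceRequest", "FailedFetchRequest"))
      println("%25s: %s".format(attr, mbsc.getAttribute(name, attr)))
  }
}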

Less important attributes

bean name: kafka:type=kafka.LogFlushStats

def getFlushesPerSecond: Double
def getAvgFlushMs: Double
def getTotalFlushMs: Long
def getMaxFlushMs: Double
def getNumFlushes: Long

bean name: kafka:type=kafka.logs.topic-partition (one bean per topic partition, e.g. kafka:type=kafka.logs.guoguo_t_1-0)

def getName: String  name of the monitored log, formatted as topic + "-" + partition ID, e.g. guoguo_t_1-0, guoguo_t_1-1
def getSize: Long  size of the persisted log files in bytes
def getNumberOfSegments: Int  number of persisted segment files
def getCurrentOffset: Long  byte offset into the log file currently being written
def getNumAppendedMessages: Long  number of appended messages; reset on every restart
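
Since each partition gets its own bean, per-topic disk usage has to be aggregated by the client. A sketch that groups the kafka.logs beans by topic, relying on the topic + "-" + partition ID name format described above:

import scala.collection.JavaConverters._
import javax.management.{MBeanServerConnection, ObjectName}

object LogSizeByTopic {
  // Sums the Size attribute of every kafka:type=kafka.logs.<topic>-<partition>
  // bean, grouped by topic name.
  def sizes(mbsc: MBeanServerConnection): Map[String, Long] = {
    val names = mbsc.queryNames(new ObjectName("kafka:type=kafka.logs.*"), null).asScala
    names.toSeq
      .map { name =>
        val partition = name.getKeyProperty("type").stripPrefix("kafka.logs.")
        val topic = partition.substring(0, partition.lastIndexOf('-'))
        topic -> mbsc.getAttribute(name, "Size").asInstanceOf[Long]
      }
      .groupBy(_._1)
      .mapValues(_.map(_._2).sum)
      .toMap
  }
}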

Other items worth monitoring:

Java heap (heap and non-heap memory usage, etc.)
GC information (number of collections, total GC time, etc.)
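
The standard platform MXBean proxies work over a remote MBeanServerConnection as well, which is simpler than decoding CompositeData by hand as the demo program below does. A sketch:

import java.lang.management.{GarbageCollectorMXBean, ManagementFactory, MemoryMXBean}
import javax.management.MBeanServerConnection
import scala.collection.JavaConverters._

object JvmStats {
  def printStats(mbsc: MBeanServerConnection) {
    // Typed proxy for java.lang:type=Memory on the remote JVM.
    val mem = ManagementFactory.newPlatformMXBeanProxy(
      mbsc, ManagementFactory.MEMORY_MXBEAN_NAME, classOf[MemoryMXBean])
    println("heap used: " + mem.getHeapMemoryUsage.getUsed +
      ", non-heap used: " + mem.getNonHeapMemoryUsage.getUsed)
    // One bean per collector, e.g. ParNew and ConcurrentMarkSweep.
    for (gc <- ManagementFactory.getPlatformMXBeans(mbsc, classOf[GarbageCollectorMXBean]).asScala)
      println(gc.getName + ": count=" + gc.getCollectionCount + ", timeMs=" + gc.getCollectionTime)
  }
}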

Consumer

Consumer state:
bean name: kafka:type=kafka.ConsumerStats

def getPartOwnerStats: String
For example:
guoguo_t_1: [
{
0-1,                   // broker ID - partition ID
fetchoffset: 58246,    // offset fetched so far
consumeroffset: 58246  // offset consumed so far
}{ 0-0, fetchoffset: 2138747, consumeroffset: 2138747 }]
def getConsumerGroup: String  the consumer group, e.g. guoguo_group_1
def getOffsetLag(topic: String, brokerId: Int, partitionId: Int): Long  how many bytes of messages have not yet been consumed
def getConsumedOffset(topic: String, brokerId: Int, partitionId: Int): Long  how many bytes of data have already been consumed
def getLatestOffset(topic: String, brokerId: Int, partitionId: Int): Long  the latest offset available in the partition
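
Because getOffsetLag and friends take parameters, JMX exposes them as operations rather than attributes, so they are called through invoke. A sketch; the operation name and signature are taken from the method list above, and the assumption is that the bean exposes them this way:

import javax.management.{MBeanServerConnection, ObjectName}

object ConsumerLag {
  // Calls getOffsetLag(topic, brokerId, partitionId) on the ConsumerStats
  // bean; per the notes above, the result is a byte count.
  def offsetLag(mbsc: MBeanServerConnection, topic: String,
                brokerId: Int, partitionId: Int): Long = {
    val name = new ObjectName("kafka:type=kafka.ConsumerStats")
    mbsc.invoke(name, "getOffsetLag",
      Array[AnyRef](topic, Int.box(brokerId), Int.box(partitionId)),
      Array("java.lang.String", "int", "int")).asInstanceOf[Long]
  }
}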

bean name: kafka:type=kafka.ConsumerAllTopicStat (aggregate across all topics; reset on restart)

bean name: kafka:type=kafka.ConsumerTopicStat.topic (reset on restart)

def getMessagesPerTopic: Long
def getBytesPerTopic: Long

bean name: kafka:type=kafka.SimpleConsumerStats

def getFetchRequestsPerSecond: Double  fetch requests issued per second
def getAvgFetchRequestMs: Double  average fetch request latency in ms
def getMaxFetchRequestMs: Double  maximum fetch request latency in ms
def getNumFetchRequests: Long  total number of fetch requests
def getConsumerThroughput: Double  consumer throughput in bytes per second

Producer

bean name: kafka:type=kafka.KafkaProducerStats

def getProduceRequestsPerSecond: Double
def getAvgProduceRequestMs: Double
def getMaxProduceRequestMs: Double
def getNumProduceRequests: Long

bean name: kafka.producer.Producer:type=AsyncProducerStats

def getAsyncProducerEvents: Int (number of messages sent out; the gap between this and the consumers' getMessagesPerTopic totals should not grow large)
def getAsyncProducerDroppedEvents: Int
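
A non-zero AsyncProducerDroppedEvents means the async producer's queue overflowed and messages were discarded, which makes it a natural alerting target. A minimal sketch:

import javax.management.{MBeanServerConnection, ObjectName}

object DroppedEventsCheck {
  def droppedEvents(mbsc: MBeanServerConnection): Int = {
    val name = new ObjectName("kafka.producer.Producer:type=AsyncProducerStats")
    // Attribute name follows the getter above (getAsyncProducerDroppedEvents).
    mbsc.getAttribute(name, "AsyncProducerDroppedEvents").asInstanceOf[Int]
  }
}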

Demo Program

package com.campaign.kafka

import java.lang.management.{GarbageCollectorMXBean, MemoryUsage}
import javax.management._
import javax.management.openmbean.CompositeData
import javax.management.remote.{JMXConnector, JMXConnectorFactory, JMXServiceURL}
import kafka.log.LogStatsMBean
import kafka.network.SocketServerStatsMBean
import kafka.server.BrokerTopicStatMBean

/**
 * Created by jiaguotian on 14-1-13.
 */
object RmiMonitor {
  def main(args: Array[String]) {
    val jmxUrl: JMXServiceURL = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://127.0.0.1:9999/jmxrmi")
    val connector: JMXConnector = JMXConnectorFactory.connect(jmxUrl)
    val mBeanServerConnection: MBeanServerConnection = connector.getMBeanServerConnection

    // 1. List the MBean domains exposed by the remote JVM.
    val domains: Array[String] = mBeanServerConnection.getDomains
    println("domains:")
    for (domain <- domains) {
      println("%25s: %s".format("domain", domain))
    }
    println("-------------------------------")

    // 2. Dump every registered MBean, sorted by implementing class.
    val beanSet: java.util.Set[ObjectInstance] = mBeanServerConnection.queryMBeans(null, null)
    val beans: Array[ObjectInstance] = beanSet.toArray(new Array[ObjectInstance](0))
      .sortWith((o1, o2) => o1.getClassName.compare(o2.getClassName) < 0)
    for (instance <- beans) {
      println("%s %s".format(instance.getClassName, instance.getObjectName))
    }
    println("-------------------------------")

    // 3. Read the broker's socket-server stats through a typed MBean proxy.
    val socketStatsName: ObjectName = ObjectName.getInstance("kafka:type=kafka.SocketServerStats")
    val socketStats: SocketServerStatsMBean = MBeanServerInvocationHandler.newProxyInstance(
      mBeanServerConnection, socketStatsName, classOf[SocketServerStatsMBean], true)
    println(socketStatsName.getCanonicalKeyPropertyListString)
    println("%25s: %s".format("AvgFetchRequestMs", socketStats.getAvgFetchRequestMs))
    println("%25s: %s".format("AvgProduceRequestMs", socketStats.getAvgProduceRequestMs))
    println("%25s: %s".format("BytesReadPerSecond", socketStats.getBytesReadPerSecond))
    println("%25s: %s".format("BytesWrittenPerSecond", socketStats.getBytesWrittenPerSecond))
    println("%25s: %s".format("FetchRequestsPerSecond", socketStats.getFetchRequestsPerSecond))
    println("%25s: %s".format("MaxFetchRequestMs", socketStats.getMaxFetchRequestMs))
    println("%25s: %s".format("MaxProduceRequestMs", socketStats.getMaxProduceRequestMs))
    println("%25s: %s".format("NumFetchRequests", socketStats.getNumFetchRequests))
    println("%25s: %s".format("NumProduceRequests", socketStats.getNumProduceRequests))
    println("%25s: %s".format("ProduceRequestsPerSecond", socketStats.getProduceRequestsPerSecond))
    println("-------------------------------")

    // 4. Heap and non-heap usage arrive as CompositeData; decode with MemoryUsage.from.
    val memoryNames: java.util.Set[ObjectName] = mBeanServerConnection.queryNames(
      ObjectName.getInstance("java.lang:type=Memory*"), null)
    for (name <- memoryNames.toArray(new Array[ObjectName](0))) {
      val info: MBeanInfo = mBeanServerConnection.getMBeanInfo(name)
      println(name.toString)
      for (attrInfo <- info.getAttributes) {
        println(attrInfo.getName + " " + attrInfo.getType)
        attrInfo.getType match {
          case "javax.management.openmbean.CompositeData" =>
            val attribute: AnyRef = mBeanServerConnection.getAttribute(name, attrInfo.getName)
            val usage: MemoryUsage = MemoryUsage.from(attribute.asInstanceOf[CompositeData])
            println("%25s: %s".format("Committed", usage.getCommitted))
            println("%25s: %s".format("Init", usage.getInit))
            println("%25s: %s".format("Max", usage.getMax))
            println("%25s: %s".format("Used", usage.getUsed))
          case _ =>
        }
      }
    }
    println("-------------------------------")

    // 5. One GarbageCollector bean per collector (e.g. ParNew, ConcurrentMarkSweep).
    val gcNames: java.util.Set[ObjectName] = mBeanServerConnection.queryNames(
      ObjectName.getInstance("java.lang:type=GarbageCollector,name=*"), null)
    for (gcName <- gcNames.toArray(new Array[ObjectName](0))) {
      val gcBean: GarbageCollectorMXBean = MBeanServerInvocationHandler.newProxyInstance(
        mBeanServerConnection, gcName, classOf[GarbageCollectorMXBean], true)
      println("%25s: %s".format("Name", gcBean.getName))
      println("%25s: %s".format("MemoryPoolNames", gcBean.getMemoryPoolNames))
      println("%25s: %s".format("ObjectName", gcBean.getObjectName))
      println("%25s: %s".format("Class", gcBean.getClass))
      println("%25s: %s".format("CollectionCount", gcBean.getCollectionCount))
      println("%25s: %s".format("CollectionTime", gcBean.getCollectionTime))
    }

    // 6. Walk all kafka-domain MBeans and print the stats appropriate to each type.
    val TypeValuePattern = "(.*):(.*)=(.*)".r
    val kafkaPattern: ObjectName = new ObjectName("kafka", "type", "*")
    val kafkaSet: java.util.Set[ObjectInstance] = mBeanServerConnection.queryMBeans(kafkaPattern, null)
    val kafkas: Array[ObjectInstance] = kafkaSet.toArray(new Array[ObjectInstance](0))
      .sortWith((o1, o2) => o1.getClassName.compare(o2.getClassName) < 0)
    for (instance <- kafkas) {
      val objectName: ObjectName = instance.getObjectName
      println(instance.getClassName + " " + objectName)
      objectName.getCanonicalName match {
        case TypeValuePattern(domain, t, v) =>
          val oName: ObjectName = new ObjectName(domain, t, v)
          instance.getClassName match {
            case "kafka.log.LogStats" =>
              val bean: LogStatsMBean = MBeanServerInvocationHandler.newProxyInstance(
                mBeanServerConnection, oName, classOf[LogStatsMBean], true)
              println("%25s: %s".format("CurrentOffset", bean.getCurrentOffset))
              println("%25s: %s".format("Name", bean.getName))
              println("%25s: %s".format("NumAppendedMessages", bean.getNumAppendedMessages))
              println("%25s: %s".format("NumberOfSegments", bean.getNumberOfSegments))
              println("%25s: %s".format("Size", bean.getSize))
            case "kafka.message.LogFlushStats" =>
              // The original proxied LogStatsMBean here by copy-paste; read the
              // flush attributes generically instead (names per the LogFlushStats list above).
              for (attr <- Array("FlushesPerSecond", "AvgFlushMs", "MaxFlushMs", "NumFlushes", "TotalFlushMs")) {
                println("%25s: %s".format(attr, mBeanServerConnection.getAttribute(oName, attr)))
              }
            case "kafka.network.SocketServerStats" =>
              val bean: SocketServerStatsMBean = MBeanServerInvocationHandler.newProxyInstance(
                mBeanServerConnection, oName, classOf[SocketServerStatsMBean], true)
              println("%25s: %s".format("BytesReadPerSecond", bean.getBytesReadPerSecond))
              println("%25s: %s".format("AvgFetchRequestMs", bean.getAvgFetchRequestMs))
              println("%25s: %s".format("AvgProduceRequestMs", bean.getAvgProduceRequestMs))
              println("%25s: %s".format("BytesWrittenPerSecond", bean.getBytesWrittenPerSecond))
              println("%25s: %s".format("FetchRequestsPerSecond", bean.getFetchRequestsPerSecond))
              println("%25s: %s".format("MaxFetchRequestMs", bean.getMaxFetchRequestMs))
              println("%25s: %s".format("MaxProduceRequestMs", bean.getMaxProduceRequestMs))
              println("%25s: %s".format("NumFetchRequests", bean.getNumFetchRequests))
              println("%25s: %s".format("NumProduceRequests", bean.getNumProduceRequests))
              println("%25s: %s".format("ProduceRequestsPerSecond", bean.getProduceRequestsPerSecond))
              println("%25s: %s".format("TotalBytesRead", bean.getTotalBytesRead))
            case "kafka.server.BrokerTopicStat" =>
              val bean: BrokerTopicStatMBean = MBeanServerInvocationHandler.newProxyInstance(
                mBeanServerConnection, oName, classOf[BrokerTopicStatMBean], true)
              println("%25s: %s".format("BytesIn", bean.getBytesIn))
              println("%25s: %s".format("BytesOut", bean.getBytesOut))
              println("%25s: %s".format("FailedFetchRequest", bean.getFailedFetchRequest))
              println("%25s: %s".format("FailedProduceRequest", bean.getFailedProduceRequest))
              println("%25s: %s".format("MessagesIn", bean.getMessagesIn))
            case _ =>
          }
        case _ =>
      }
    }
    connector.close() // release the JMX/RMI connection
  }
}

Output

domains:
domain: JMImplementation
domain: com.sun.management
domain: kafka
domain: java.nio
domain: java.lang
domain: java.util.logging
-------------------------------
com.sun.management.UnixOperatingSystem java.lang:type=OperatingSystem
javax.management.MBeanServerDelegate JMImplementation:type=MBeanServerDelegate
kafka.log.LogStats kafka:type=kafka.logs.guoguo_t_1-1
kafka.log.LogStats kafka:type=kafka.logs.guoguo_t_1-0
kafka.network.SocketServerStats kafka:type=kafka.SocketServerStats
kafka.utils.Log4jController kafka:type=kafka.Log4jController
sun.management.ClassLoadingImpl java.lang:type=ClassLoading
sun.management.CompilationImpl java.lang:type=Compilation
sun.management.GarbageCollectorImpl java.lang:type=GarbageCollector,name=ConcurrentMarkSweep
sun.management.GarbageCollectorImpl java.lang:type=GarbageCollector,name=ParNew
sun.management.HotSpotDiagnostic com.sun.management:type=HotSpotDiagnostic
sun.management.ManagementFactoryHelper$1 java.nio:type=BufferPool,name=direct
sun.management.ManagementFactoryHelper$1 java.nio:type=BufferPool,name=mapped
sun.management.ManagementFactoryHelper$PlatformLoggingImpl java.util.logging:type=Logging
sun.management.MemoryImpl java.lang:type=Memory
sun.management.MemoryManagerImpl java.lang:type=MemoryManager,name=CodeCacheManager
sun.management.MemoryPoolImpl java.lang:type=MemoryPool,name=Par Survivor Space
sun.management.MemoryPoolImpl java.lang:type=MemoryPool,name=CMS Perm Gen
sun.management.MemoryPoolImpl java.lang:type=MemoryPool,name=Par Eden Space
sun.management.MemoryPoolImpl java.lang:type=MemoryPool,name=Code Cache
sun.management.MemoryPoolImpl java.lang:type=MemoryPool,name=CMS Old Gen
sun.management.RuntimeImpl java.lang:type=Runtime
sun.management.ThreadImpl java.lang:type=Threading
-------------------------------
type=kafka.SocketServerStats
getAvgFetchRequestMs: 0.0
getAvgProduceRequestMs: 0.0
getBytesReadPerSecond: 0.0
getBytesWrittenPerSecond: 0.0
getFetchRequestsPerSecond: -0.0
getMaxFetchRequestMs: 0.0
getMaxProduceRequestMs: 0.0
getNumFetchRequests: 0
getNumProduceRequests: 0
getProduceRequestsPerSecond: -0.0
-------------------------------
java.lang:type=Memory
HeapMemoryUsage javax.management.openmbean.CompositeData
getCommitted: 3194421248
getInit: 3221225472
getMax: 3194421248
getUsed: 163302248
NonHeapMemoryUsage javax.management.openmbean.CompositeData
getCommitted: 24313856
getInit: 24313856
getMax: 136314880
getUsed: 14854816
ObjectPendingFinalizationCount int
Verbose boolean
ObjectName javax.management.ObjectName
-------------------------------
getName: ParNew
getMemoryPoolNames: [Ljava.lang.String;@23652209
getObjectName: java.lang:type=GarbageCollector,name=ParNew
getClass: class com.sun.proxy.$Proxy1
getCollectionCount: 0
getCollectionTime: 0
getName: ConcurrentMarkSweep
getMemoryPoolNames: [Ljava.lang.String;@2c61bbb7
getObjectName: java.lang:type=GarbageCollector,name=ConcurrentMarkSweep
getClass: class com.sun.proxy.$Proxy1
getCollectionCount: 0
getCollectionTime: 0
kafka.log.LogStats kafka:type=kafka.logs.guoguo_t_1-1
CurrentOffset: 5519897
Name: guoguo_t_1-1
NumAppendedMessages: 0
NumberOfSegments: 1
Size: 5519897
kafka.log.LogStats kafka:type=kafka.logs.guoguo_t_1-0
CurrentOffset: 7600338
Name: guoguo_t_1-0
NumAppendedMessages: 0
NumberOfSegments: 1
Size: 7600338
kafka.network.SocketServerStats kafka:type=kafka.SocketServerStats
BytesReadPerSecond: 0.0
AvgFetchRequestMs: 0.0
AvgProduceRequestMs: 0.0
BytesWrittenPerSecond: 0.0
FetchRequestsPerSecond: -0.0
MaxFetchRequestMs: 0.0
MaxProduceRequestMs: 0.0
NumFetchRequests: 0
NumProduceRequests: 0
ProduceRequestsPerSecond: -0.0
TotalBytesRead: 0
kafka.utils.Log4jController kafka:type=kafka.Log4jController
