In this blog post I will show you how to integrate Kafka with Ganglia. This is a useful topic for anyone who wants to benchmark a Kafka cluster or measure its performance by monitoring specific Kafka metrics through Ganglia.

Before going ahead, let me briefly explain what Kafka and Ganglia are.

Kafka – Kafka is an open-source distributed message broker developed under the Apache Software Foundation. It provides a unified, high-throughput, low-latency platform for handling real-time data feeds.

Ganglia – Ganglia is a scalable distributed monitoring system for high-performance computing systems such as clusters and grids.

Now let's get started. In this example we have a Hadoop cluster with 3 Kafka brokers; first we will see how to install and configure Ganglia on these machines.

Step 1: Setup and Configure Ganglia gmetad and gmond

First, install the EPEL repository on all the nodes:

yum install epel-release

On the master node (ganglia-server), install the packages below:

yum install rrdtool ganglia ganglia-gmetad ganglia-gmond ganglia-web httpd php apr apr-util

On the slave nodes (ganglia-client), install the package below:

yum install ganglia-gmond

On the master node, do the following:

chown apache:apache -R /var/www/html/ganglia

Edit the config file below and allow access to the Ganglia web page from any IP:

vi /etc/httpd/conf.d/ganglia.conf

It should look like below:

#
# Ganglia monitoring system php web frontend
#
Alias /ganglia /usr/share/ganglia
<Location /ganglia>
Order deny,allow
# "Allow from all" is very important – without it you won't be able to see the Ganglia web UI
Allow from all
Allow from 127.0.0.1
Allow from ::1
# Allow from .example.com
</Location>
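Note: the Order/Allow syntax above is for Apache httpd 2.2. On distributions that ship httpd 2.4 (for example CentOS/RHEL 7), these directives only take effect when mod_access_compat is loaded; otherwise a roughly equivalent 2.4-style block (a sketch – adjust to your own access policy) would be:

Alias /ganglia /usr/share/ganglia
<Location /ganglia>
Require all granted
</Location>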

On the master node, edit the gmetad.conf file; it should look like below (change the IP address shown to your ganglia-server's private IP address):

#cat /etc/ganglia/gmetad.conf |grep -v ^#
data_source "hadoopkafka" 172.30.0.81:8649
gridname "Hadoop-Kafka"
setuid_username ganglia
case_sensitive_hostnames 0

On the master node edit gmond.conf; keep the other parameters at their defaults except the ones below.

Then copy gmond.conf to all other nodes in the cluster (a sketch of one way to do this is shown after the config below).

cluster {
name = "hadoopkafka"
owner = "unspecified"
latlong = "unspecified"
url = "unspecified"
}
/* The host section describes attributes of the host, like the location */
host {
location = "unspecified"
}
/* Feel free to specify as many udp_send_channels as you like. Gmond
used to only support having a single channel */
udp_send_channel {
#bind_hostname = yes # Highly recommended, soon to be default.
                       # This option tells gmond to use a source address
                       # that resolves to the machine's hostname. Without
                       # this, the metrics may appear to come from any
                       # interface and the DNS names associated with
                       # those IPs will be used to create the RRDs.
#mcast_join = 239.2.11.71
host = 172.30.0.81
port = 8649
#ttl = 1
}
/* You can specify as many udp_recv_channels as you like as well. */
udp_recv_channel {
#mcast_join = 239.2.11.71
port = 8649
#bind = 239.2.11.71
#retry_bind = true
# Size of the UDP buffer. If you are handling lots of metrics you really
# should bump it up to e.g. 10MB or even higher.
# buffer = 10485760
}
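Once gmond.conf has been edited on the master, it can be pushed to the remaining nodes. A minimal sketch (kafka-node1, kafka-node2, kafka-node3 are placeholders for your own hostnames):

for node in kafka-node1 kafka-node2 kafka-node3; do
    # copy the edited config to each node in the cluster
    scp /etc/ganglia/gmond.conf ${node}:/etc/ganglia/gmond.conf
done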

Start apache service on master node

service httpd start

Start gmetad service on master node

service gmetad start

Start the gmond service on every node in the cluster

service gmond start
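To confirm that gmond came up and is listening on the port configured above (8649), a quick check like the following can be run on any node (assuming netstat is installed):

service gmond status
netstat -anp | grep 8649    # gmond should show up bound to port 8649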

That's it! Now you can see basic Ganglia metrics by visiting the web UI at http://IP-address-of-ganglia-server/ganglia

Step 2: Ganglia Integration with Kafka

Enable JMX Monitoring for Kafka Brokers

In order to get custom Kafka metrics, we need to enable JMX monitoring for the Kafka broker daemon.

To enable JMX monitoring for a Kafka broker, follow the instructions below:

Edit kafka-run-class.sh and modify the KAFKA_JMX_OPTS variable as shown below (replace kafka.broker.hostname with your Kafka broker's hostname):

KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=kafka.broker.hostname -Djava.net.preferIPv4Stack=true"

Add the line below to kafka-server-start.sh (for Hortonworks Hadoop, the path is /usr/hdp/current/kafka-broker/bin/kafka-server-start.sh):

export JMX_PORT=${JMX_PORT:-9999}

That's it! Do the above steps on all Kafka brokers and restart the brokers (manually or via a management UI, whichever is applicable).

Verify that the JMX port has been enabled – you can use jconsole to do so.
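If no desktop is available for jconsole, a quick sanity check from the broker host itself is to confirm that the Kafka process is listening on the JMX port (9999 as configured above) – a sketch, assuming netstat is installed:

netstat -tlnp | grep 9999    # should show the Kafka broker's java process listening on the JMX port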

Download, install and configure jmxtrans

Download the jmxtrans rpm from the link below and install it using the rpm command:

http://code.google.com/p/jmxtrans/downloads/detail?name=jmxtrans-250-0.noarch.rpm&can=2&q=

Once you have installed jmxtrans, make sure that java and jps are available in your $PATH.
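A quick way to verify this (both commands should resolve to the JDK you are running Kafka with):

which java jps
java -version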

Write a JSON config for fetching MBeans on each Kafka broker.

I have written a JSON config for monitoring custom Kafka metrics; please download it from here.

Note that you need to replace "IP_address_of_kafka_broker" in the downloaded JSON with your Kafka broker's IP address, and do the same for the Ganglia server's IP address.
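For orientation, a minimal jmxtrans JSON sketch for a single broker and a single MBean might look roughly like the following. This is only an illustrative fragment, not the downloadable file – the MBean shown (kafka.server BrokerTopicMetrics / MessagesInPerSec) is just one example of a Kafka metric, and the host values are the placeholders mentioned above:

{
  "servers": [
    {
      "host": "IP_address_of_kafka_broker",
      "port": "9999",
      "numQueryThreads": 2,
      "queries": [
        {
          "obj": "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec",
          "attr": ["Count", "OneMinuteRate"],
          "outputWriters": [
            {
              "@class": "com.googlecode.jmxtrans.model.output.GangliaWriter",
              "settings": {
                "groupName": "custom.metrics",
                "host": "IP_address_of_ganglia_server",
                "port": 8649
              }
            }
          ]
        }
      ]
    }
  ]
}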

Once you are done writing the JSON, verify the syntax using any online JSON validator (http://jsonlint.com/).

Start jmxtrans using the commands below:

cd /usr/share/jmxtrans/
sh jmxtrans.sh start $name-of-the-json-file

Verify that jmxtrans has started successfully using a simple "ps" command.
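For example (the brackets in the pattern just keep grep from matching itself):

ps -ef | grep [j]mxtrans    # should show the running jmxtrans java process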

Repeat the above procedure on all Kafka brokers.

 

Verify custom metrics

Log in to the Ganglia server, go to the rrd directory (by default /var/lib/ganglia/rrds/) and check whether there are new rrd files for the Kafka metrics.

You should see new rrd files for the Kafka metrics appear under the cluster's host directories.
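A quick way to spot them – a sketch assuming the default rrd path and the cluster name used above (the exact file names depend on how the metrics are named in your jmxtrans JSON, so the grep filter may need adjusting):

ls /var/lib/ganglia/rrds/hadoopkafka/*/*.rrd | grep -i kafka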

Go to the Ganglia web UI and select hadoopkafka from the cluster dropdown.

Then select "custom.metrics" from the metric-group dropdown.

That’s all! 
