See this for how the metrics API is called.

See here for the list of available configuration options.
Beyond the normal configuration, a few extra settings have to be added. Once they are in place, you can use the metrics API inside your StreamTask to report metrics.
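For reference, here is a minimal sketch of what that looks like in a task, using Samza's InitableTask/StreamTask interfaces and MetricsRegistry.newCounter(). The package, class, and counter names (hs.samza.simple.SimpleKafkaTask, messageCount) are only chosen to line up with the sample output further down; adjust them for your own task.

package hs.samza.simple;

import org.apache.samza.config.Config;
import org.apache.samza.metrics.Counter;
import org.apache.samza.system.IncomingMessageEnvelope;
import org.apache.samza.task.InitableTask;
import org.apache.samza.task.MessageCollector;
import org.apache.samza.task.StreamTask;
import org.apache.samza.task.TaskContext;
import org.apache.samza.task.TaskCoordinator;

public class SimpleKafkaTask implements StreamTask, InitableTask {
    private Counter messageCount;

    @Override
    public void init(Config config, TaskContext context) {
        // Register a counter with the task's metrics registry. The group string
        // becomes the key under "metrics" in the reported JSON; using
        // this.getClass().toString() gives "class hs.samza.simple.SimpleKafkaTask",
        // which is what shows up in the sample output below.
        messageCount = context.getMetricsRegistry()
                .newCounter(this.getClass().toString(), "messageCount");
    }

    @Override
    public void process(IncomingMessageEnvelope envelope, MessageCollector collector,
                        TaskCoordinator coordinator) {
        // Count every processed message; the reporter snapshots and publishes
        // this value every window.ms milliseconds.
        messageCount.inc();
    }
}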
 
  1. What configuration is needed?
    1. Set which reporter factory to use; Samza ships with two, one for Kafka and one for JMX.
    2. Set the stream the reporter writes to, including that stream's serde.
    3. Register the reporter.
#Configure the stream used as the output stream
#Set the system the samza-metrics stream belongs to
streams.samza-metrics.system=kafka
#Set the physical name of the samza-metrics stream, i.e. the name of the corresponding Kafka topic
streams.samza-metrics.stream=samza-metrics

#Configure the reporter
#Set the reporter factory. MetricsSnapshotReporterFactory is used here, which sends metrics to Kafka as JSON.
#Note that this class must be the factory's name; the value shown in the current Samza documentation is wrong.
metrics.reporter.samza-metrics.class=org.apache.samza.metrics.reporter.MetricsSnapshotReporterFactory
 
#Have the samza-metrics reporter send a metrics snapshot every 10 seconds
metrics.reporter.samza-metrics.window.ms=10000
 
#Define the stream that the samza-metrics reporter writes to: the samza-metrics stream under the kafka system.
#Note that the value must be written in system.stream form, i.e. kafka.samza-metrics.
#This setting is required, but it is not listed in Samza's configuration documentation.
metrics.reporter.samza-metrics.stream=kafka.samza-metrics
 
#Register the samza-metrics reporter
metrics.reporters=samza-metrics

#Configure the serdes
#Serializers: register the names of the serde factories that can be used
serializers.registry.json.class=org.apache.samza.serializers.JsonSerdeFactory
serializers.registry.metrics.class=org.apache.samza.serializers.MetricsSnapshotSerdeFactory
 
#Use the "metrics" serde factory for message serde on the samza-metrics stream.
#Since this stream carries metrics snapshots, this serde factory must be used.
systems.kafka.streams.samza-metrics.samza.msg.serde=metrics
 
 
In the end you get three kinds of metrics messages: the StreamTask's, the SystemProducer's, and the SystemConsumer's:

{"metrics":{"org.apache.samza.container.TaskInstanceMetrics":{"process-calls":28575767,"messages-sent":0,"commit-calls":12,"window-skipped":30204500,"kafka-pizza-offset":"36950462","commit-skipped":30204488,"send-skipped":30204500,"window-calls":0,"send-calls":0},"class hs.samza.simple.SimpleKafkaTask":{"messageCount":28500000}},"header":{"reset-time":1398672217312,"job-id":"1","time":1398672937854,"host":"hadoop-node-1","container-name":"samza-container-2","source":"Partition-2","job-name":"my-samza-test","samza-version":"0.0.1","version":"0.0.1"}}

{"metrics":{"org.apache.samza.system.kafka.KafkaSystemProducerMetrics":{"kafka-producer-sends":38,"kafka-partition-2-producer-buffer-size":0,"kafka-flushes":38,"kafka-metricssnapshotreporterfactory-producer-buffer-size":0,"kafka-samza-container-2-producer-buffer-size":0,"kafka-producer-reconnects":0,"kafka-flush-sizes":38}},"header":{"reset-time":1398672217312,"job-id":"1","time":1398672937855,"host":"hadoop-node-1","container-name":"samza-container-2","source":"MetricsSnapshotReporterFactory","job-name":"my-samza-test","samza-version":"0.0.1","version":"0.0.1"}}

{"metrics":{"org.apache.samza.system.SystemConsumersMetrics":{"blocking-poll-timeout":10,"kafka-messages-per-poll":15667419,"chose-object":29071417,"kafka-ssp-fetches-per-poll":1501156,"max-buffered-messages-per-stream-partition":1000,"ssps-needed-by-chooser":1,"kafka-pizza-messages-chosen":29071417,"unprocessed-messages":0,"chose-null":1483650,"kafka-polls":15667420,"poll-timeout":10},"org.apache.samza.metrics.JvmMetrics":{"threads-runnable":5,"mem-heap-committed-mb":313.5625,"threads-new":0,"mem-non-heap-committed-mb":24.75,"mem-heap-used-mb":93.080475,"mem-non-heap-used-mb":24.63375,"threads-terminated":0,"ps marksweep-gc-time-millis":41,"ps scavenge-gc-count":380,"ps scavenge-gc-time-millis":16289,"gc-time-millis":16330,"threads-blocked":0,"threads-timed-waiting":6,"ps marksweep-gc-count":1,"threads-waiting":4,"gc-count":381},"org.apache.samza.container.SamzaContainerMetrics":{"process-null-envelopes":1483649,"process-envelopes":29071417,"process-calls":30555067,"commit-calls":30555066,"window-calls":30555066,"send-calls":30555066},"org.apache.samza.system.chooser.RoundRobinChooserMetrics":{"buffered-messages":0},"org.apache.samza.system.kafka.KafkaSystemConsumerMetrics":{"kafka-pizza-3-offset-change":37437828,"poll-count":15667420,"kafka-10.5.132.122-9092-topic-partitions":1,"no-more-messages-SystemStreamPartition [partition=Partition [partition=3], system=kafka, stream=pizza]":false,"kafka-10.5.132.122-9092-messages-read":36263,"blocking-poll-count-SystemStreamPartition [partition=Partition [partition=3], system=kafka, stream=pizza]":0,"kafka-pizza-3-bytes-read":1512471142,"kafka-pizza-3-messages-read":29071417,"kafka-10.5.132.122-9092-skipped-fetch-requests":8,"blocking-poll-timeout-count-SystemStreamPartition [partition=Partition [partition=3], system=kafka, stream=pizza]":728147,"kafka-pizza-3-messages-behind-high-watermark":0,"buffered-message-count-SystemStreamPartition [partition=Partition [partition=3], system=kafka, stream=pizza]":0,"kafka-10.5.132.122-9092-bytes-read":1512471142,"kafka-10.5.132.122-9092-reconnects":0},"org.apache.samza.system.kafka.KafkaSystemProducerMetrics":{"kafka-producer-sends":0,"kafka-flushes":14,"kafka-partition-3-producer-buffer-size":0,"kafka-producer-reconnects":0,"kafka-flush-sizes":0},"org.apache.samza.system.SystemProducersMetrics":{"partition-3-sends":0,"partition-3-flushes":14,"flushes":14,"sends":0}},"header":{"reset-time":1398672219649,"job-id":"1","time":1398673001854,"host":"hd-e.cdh","container-name":"samza-container-3","source":"samza-container-3","job-name":"my-samza-test","samza-version":"0.0.1","version":"0.0.1"}}

Just looking at all this makes my head spin. The JMX reporter would probably be easier on the eyes; I'll try it some other time...
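For the record, judging from the Samza docs, the JMX reporter does not need a stream or a serde, so wiring it up alongside the Kafka one should only take two more lines (the class is the documented JmxReporterFactory; the reporter name "jmx" is my own choice):

#Use the JMX reporter factory for a reporter named jmx
metrics.reporter.jmx.class=org.apache.samza.metrics.reporter.JmxReporterFactory

#Register both reporters
metrics.reporters=samza-metrics,jmx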
