Telegraf Learning Part 1: Basic Installation
Telegraf is a plugin-driven server agent developed by InfluxData that makes it easy to collect and report system metrics.
I am on macOS, so I used Homebrew for this test installation.
Installation
- Download
Official releases (macOS builds included) are available here:
https://github.com/influxdata/telegraf/releases
- Linux
Download the package that matches your distribution from the releases page.
- macOS
brew update
brew install telegraf
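To quickly confirm the installation, you can print the version (the exact version string depends on what brew installed; the log output later in this post shows 1.11.3):
telegraf --version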
Basic usage
- Generate a runtime configuration file
The installed binary already includes a command for generating configuration files. The following generates a simple config that collects CPU and memory usage and writes the metrics to InfluxDB:
telegraf -sample-config -input-filter cpu:mem -output-filter influxdb > telegraf.conf
The generated file looks like this:
# Telegraf Configuration
#
# Telegraf is entirely plugin driven. All metrics are gathered from the
# declared inputs, and sent to the declared outputs.
#
# Plugins must be declared in here to be active.
# To deactivate a plugin, comment out the name and any variables.
#
# Use 'telegraf -config telegraf.conf -test' to see what metrics a config
# file would generate.
#
# Environment variables can be used anywhere in this config file, simply surround
# them with ${}. For strings the variable must be within quotes (ie, "${STR_VAR}"),
# for numbers and booleans they should be plain (ie, ${INT_VAR}, ${BOOL_VAR})
# Global tags can be specified here in key="value" format.
[global_tags]
# dc = "us-east-1" # will tag all metrics with dc=us-east-1
# rack = "1a"
## Environment variables can be used as tags, and throughout the config file
# user = "$USER"
# Configuration for telegraf agent
[agent]
## Default data collection interval for all inputs
interval = "10s"
## Rounds collection interval to 'interval'
## ie, if interval="10s" then always collect on :00, :10, :20, etc.
round_interval = true
## Telegraf will send metrics to outputs in batches of at most
## metric_batch_size metrics.
## This controls the size of writes that Telegraf sends to output plugins.
metric_batch_size = 1000
## Maximum number of unwritten metrics per output.
metric_buffer_limit = 10000
## Collection jitter is used to jitter the collection by a random amount.
## Each plugin will sleep for a random time within jitter before collecting.
## This can be used to avoid many plugins querying things like sysfs at the
## same time, which can have a measurable effect on the system.
collection_jitter = "0s"
## Default flushing interval for all outputs. Maximum flush_interval will be
## flush_interval + flush_jitter
flush_interval = "10s"
## Jitter the flush interval by a random amount. This is primarily to avoid
## large write spikes for users running a large number of telegraf instances.
## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
flush_jitter = "0s"
## By default or when set to "0s", precision will be set to the same
## timestamp order as the collection interval, with the maximum being 1s.
## ie, when interval = "10s", precision will be "1s"
## when interval = "250ms", precision will be "1ms"
## Precision will NOT be used for service inputs. It is up to each individual
## service input to set the timestamp at the appropriate precision.
## Valid time units are "ns", "us" (or "µs"), "ms", "s".
precision = ""
## Log at debug level.
# debug = false
## Log only error level messages.
# quiet = false
## Log file name, the empty string means to log to stderr.
# logfile = ""
## The logfile will be rotated after the time interval specified. When set
## to 0 no time based rotation is performed.
# logfile_rotation_interval = "0d"
## The logfile will be rotated when it becomes larger than the specified
## size. When set to 0 no size based rotation is performed.
# logfile_rotation_max_size = "0MB"
## Maximum number of rotated archives to keep, any older logs are deleted.
## If set to -1, no archives are removed.
# logfile_rotation_max_archives = 5
## Override default hostname, if empty use os.Hostname()
hostname = ""
## If set to true, do not set the "host" tag in the telegraf agent.
omit_hostname = false
###############################################################################
# OUTPUT PLUGINS #
###############################################################################
# Configuration for sending metrics to InfluxDB
[[outputs.influxdb]]
## The full HTTP or UDP URL for your InfluxDB instance.
##
## Multiple URLs can be specified for a single cluster, only ONE of the
## urls will be written to each interval.
# urls = ["unix:///var/run/influxdb.sock"]
# urls = ["udp://127.0.0.1:8089"]
# urls = ["http://127.0.0.1:8086"]
## The target database for metrics; will be created as needed.
## For UDP url endpoint database needs to be configured on server side.
# database = "telegraf"
## The value of this tag will be used to determine the database. If this
## tag is not set the 'database' option is used as the default.
# database_tag = ""
## If true, no CREATE DATABASE queries will be sent. Set to true when using
## Telegraf with a user without permissions to create databases or when the
## database already exists.
# skip_database_creation = false
## Name of existing retention policy to write to. Empty string writes to
## the default retention policy. Only takes effect when using HTTP.
# retention_policy = ""
## Write consistency (clusters only), can be: "any", "one", "quorum", "all".
## Only takes effect when using HTTP.
# write_consistency = "any"
## Timeout for HTTP messages.
# timeout = "5s"
## HTTP Basic Auth
# username = "telegraf"
# password = "metricsmetricsmetricsmetrics"
## HTTP User-Agent
# user_agent = "telegraf"
## UDP payload size is the maximum packet size to send.
# udp_payload = "512B"
## Optional TLS Config for use on HTTP connections.
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## HTTP Proxy override, if unset values the standard proxy environment
## variables are consulted to determine which proxy, if any, should be used.
# http_proxy = "http://corporate.proxy:3128"
## Additional HTTP headers
# http_headers = {"X-Special-Header" = "Special-Value"}
## HTTP Content-Encoding for write request body, can be set to "gzip" to
## compress body or "identity" to apply no encoding.
# content_encoding = "identity"
## When true, Telegraf will output unsigned integers as unsigned values,
## i.e.: "42u". You will need a version of InfluxDB supporting unsigned
## integer values. Enabling this option will result in field type errors if
## existing data has been written.
# influx_uint_support = false
###############################################################################
# PROCESSOR PLUGINS #
###############################################################################
# # Convert values to another metric value type
# [[processors.converter]]
# ## Tags to convert
# ##
# ## The table key determines the target type, and the array of key-values
# ## select the keys to convert. The array may contain globs.
# ## <target-type> = [<tag-key>...]
# [processors.converter.tags]
# string = []
# integer = []
# unsigned = []
# boolean = []
# float = []
#
# ## Fields to convert
# ##
# ## The table key determines the target type, and the array of key-values
# ## select the keys to convert. The array may contain globs.
# ## <target-type> = [<field-key>...]
# [processors.converter.fields]
# tag = []
# string = []
# integer = []
# unsigned = []
# boolean = []
# float = []
# # Map enum values according to given table.
# [[processors.enum]]
# [[processors.enum.mapping]]
# ## Name of the field to map
# field = "status"
#
# ## Name of the tag to map
# # tag = "status"
#
# ## Destination tag or field to be used for the mapped value. By default the
# ## source tag or field is used, overwriting the original value.
# dest = "status_code"
#
# ## Default value to be used for all values not contained in the mapping
# ## table. When unset, the unmodified value for the field will be used if no
# ## match is found.
# # default = 0
#
# ## Table of mappings
# [processors.enum.mapping.value_mappings]
# green = 1
# amber = 2
# red = 3
# # Apply metric modifications using override semantics.
# [[processors.override]]
# ## All modifications on inputs and aggregators can be overridden:
# # name_override = "new_name"
# # name_prefix = "new_name_prefix"
# # name_suffix = "new_name_suffix"
#
# ## Tags to be added (all values must be strings)
# # [processors.override.tags]
# # additional_tag = "tag_value"
# # Parse a value in a specified field/tag(s) and add the result in a new metric
# [[processors.parser]]
# ## The name of the fields whose value will be parsed.
# parse_fields = []
#
# ## If true, incoming metrics are not emitted.
# drop_original = false
#
# ## If set to override, emitted metrics will be merged by overriding the
# ## original metric using the newly parsed metrics.
# merge = "override"
#
# ## The dataformat to be read from files
# ## Each data format has its own unique set of configuration options, read
# ## more about them here:
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
# data_format = "influx"
# # Print all metrics that pass through this filter.
# [[processors.printer]]
# # Transforms tag and field values with regex pattern
# [[processors.regex]]
# ## Tag and field conversions defined in a separate sub-tables
# # [[processors.regex.tags]]
# # ## Tag to change
# # key = "resp_code"
# # ## Regular expression to match on a tag value
# # pattern = "^(\\d)\\d\\d$"
# # ## Pattern for constructing a new value (${1} represents first subgroup)
# # replacement = "${1}xx"
#
# # [[processors.regex.fields]]
# # key = "request"
# # ## All the power of the Go regular expressions available here
# # ## For example, named subgroups
# # pattern = "^/api(?P<method>/[\\w/]+)\\S*"
# # replacement = "${method}"
# # ## If result_key is present, a new field will be created
# # ## instead of changing existing field
# # result_key = "method"
#
# ## Multiple conversions may be applied for one field sequentially
# ## Let's extract one more value
# # [[processors.regex.fields]]
# # key = "request"
# # pattern = ".*category=(\\w+).*"
# # replacement = "${1}"
# # result_key = "search_category"
# # Rename measurements, tags, and fields that pass through this filter.
# [[processors.rename]]
# # Perform string processing on tags, fields, and measurements
# [[processors.strings]]
# ## Convert a tag value to uppercase
# # [[processors.strings.uppercase]]
# # tag = "method"
#
# ## Convert a field value to lowercase and store in a new field
# # [[processors.strings.lowercase]]
# # field = "uri_stem"
# # dest = "uri_stem_normalised"
#
# ## Trim leading and trailing whitespace using the default cutset
# # [[processors.strings.trim]]
# # field = "message"
#
# ## Trim leading characters in cutset
# # [[processors.strings.trim_left]]
# # field = "message"
# # cutset = "\t"
#
# ## Trim trailing characters in cutset
# # [[processors.strings.trim_right]]
# # field = "message"
# # cutset = "\r\n"
#
# ## Trim the given prefix from the field
# # [[processors.strings.trim_prefix]]
# # field = "my_value"
# # prefix = "my_"
#
# ## Trim the given suffix from the field
# # [[processors.strings.trim_suffix]]
# # field = "read_count"
# # suffix = "_count"
#
# ## Replace all non-overlapping instances of old with new
# # [[processors.strings.replace]]
# # measurement = "*"
# # old = ":"
# # new = "_"
# # Print all metrics that pass through this filter.
# [[processors.topk]]
# ## How many seconds between aggregations
# # period = 10
#
# ## How many top metrics to return
# # k = 10
#
# ## Over which tags should the aggregation be done. Globs can be specified, in
# ## which case any tag matching the glob will be aggregated over. If set to an
# ## empty list, no aggregation over tags is done
# # group_by = ['*']
#
# ## Over which fields are the top k are calculated
# # fields = ["value"]
#
# ## What aggregation to use. Options: sum, mean, min, max
# # aggregation = "mean"
#
# ## Instead of the top k largest metrics, return the bottom k lowest metrics
# # bottomk = false
#
# ## The plugin assigns each metric a GroupBy tag generated from its name and
# ## tags. If this setting is different than "" the plugin will add a
# ## tag (which name will be the value of this setting) to each metric with
# ## the value of the calculated GroupBy tag. Useful for debugging
# # add_groupby_tag = ""
#
# ## These settings provide a way to know the position of each metric in
# ## the top k. The 'add_rank_field' setting allows to specify for which
# ## fields the position is required. If the list is non empty, then a field
# ## will be added to each and every metric for each string present in this
# ## setting. This field will contain the ranking of the group that
# ## the metric belonged to when aggregated over that field.
# ## The name of the field will be set to the name of the aggregation field,
# ## suffixed with the string '_topk_rank'
# # add_rank_fields = []
#
# ## These settings provide a way to know what values the plugin is generating
# ## when aggregating metrics. The 'add_agregate_field' setting allows to
# ## specify for which fields the final aggregation value is required. If the
# ## list is non empty, then a field will be added to each every metric for
# ## each field present in this setting. This field will contain
# ## the computed aggregation for the group that the metric belonged to when
# ## aggregated over that field.
# ## The name of the field will be set to the name of the aggregation field,
# ## suffixed with the string '_topk_aggregate'
# # add_aggregate_fields = []
###############################################################################
# AGGREGATOR PLUGINS #
###############################################################################
# # Keep the aggregate basicstats of each metric passing through.
# [[aggregators.basicstats]]
# ## The period on which to flush & clear the aggregator.
# period = "30s"
# ## If true, the original metric will be dropped by the
# ## aggregator and will not get sent to the output plugins.
# drop_original = false
#
# ## Configures which basic stats to push as fields
# # stats = ["count", "min", "max", "mean", "stdev", "s2", "sum"]
# # Report the final metric of a series
# [[aggregators.final]]
# ## The period on which to flush & clear the aggregator.
# period = "30s"
# ## If true, the original metric will be dropped by the
# ## aggregator and will not get sent to the output plugins.
# drop_original = false
#
# ## The time that a series is not updated until considering it final.
# series_timeout = "5m"
# # Create aggregate histograms.
# [[aggregators.histogram]]
# ## The period in which to flush the aggregator.
# period = "30s"
#
# ## If true, the original metric will be dropped by the
# ## aggregator and will not get sent to the output plugins.
# drop_original = false
#
# ## If true, the histogram will be reset on flush instead
# ## of accumulating the results.
# reset = false
#
# ## Example config that aggregates all fields of the metric.
# # [[aggregators.histogram.config]]
# # ## The set of buckets.
# # buckets = [0.0, 15.6, 34.5, 49.1, 71.5, 80.5, 94.5, 100.0]
# # ## The name of metric.
# # measurement_name = "cpu"
#
# ## Example config that aggregates only specific fields of the metric.
# # [[aggregators.histogram.config]]
# # ## The set of buckets.
# # buckets = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0]
# # ## The name of metric.
# # measurement_name = "diskio"
# # ## The concrete fields of metric
# # fields = ["io_time", "read_time", "write_time"]
# # Keep the aggregate min/max of each metric passing through.
# [[aggregators.minmax]]
# ## General Aggregator Arguments:
# ## The period on which to flush & clear the aggregator.
# period = "30s"
# ## If true, the original metric will be dropped by the
# ## aggregator and will not get sent to the output plugins.
# drop_original = false
# # Count the occurrence of values in fields.
# [[aggregators.valuecounter]]
# ## General Aggregator Arguments:
# ## The period on which to flush & clear the aggregator.
# period = "30s"
# ## If true, the original metric will be dropped by the
# ## aggregator and will not get sent to the output plugins.
# drop_original = false
# ## The fields for which the values will be counted
# fields = []
###############################################################################
# INPUT PLUGINS #
###############################################################################
# Read metrics about cpu usage
[[inputs.cpu]]
## Whether to report per-cpu stats or not
percpu = true
## Whether to report total system cpu stats or not
totalcpu = true
## If true, collect raw CPU time metrics.
collect_cpu_time = false
## If true, compute and report the sum of all non-idle CPU states.
report_active = false
# Read metrics about memory usage
[[inputs.mem]]
# no configuration
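The --input-filter / --output-filter mechanism used above accepts any of the bundled plugins, separated by colons. As a hypothetical variant (not the config used in the rest of this post), a file that also collects disk usage could be generated with the standard disk input:
telegraf --sample-config --input-filter cpu:mem:disk --output-filter influxdb > telegraf.conf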
- Run in test mode
Test mode is a convenient way to check what metrics the config will produce:
telegraf --config telegraf.conf --test
Output:
2019-07-27T15:36:34Z I! Starting Telegraf 1.11.3
> cpu,cpu=cpu0,host=dalongrong.local usage_guest=0,usage_guest_nice=0,usage_idle=38,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=0,usage_steal=0,usage_system=6,usage_user=56 1564241795000000000
> cpu,cpu=cpu1,host=dalongrong.local usage_guest=0,usage_guest_nice=0,usage_idle=96,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=0,usage_steal=0,usage_system=2,usage_user=2 1564241795000000000
> cpu,cpu=cpu2,host=dalongrong.local usage_guest=0,usage_guest_nice=0,usage_idle=44,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=0,usage_steal=0,usage_system=4,usage_user=52 1564241795000000000
> cpu,cpu=cpu3,host=dalongrong.local usage_guest=0,usage_guest_nice=0,usage_idle=100,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=0,usage_steal=0,usage_system=0,usage_user=0 1564241795000000000
> cpu,cpu=cpu4,host=dalongrong.local usage_guest=0,usage_guest_nice=0,usage_idle=54,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=0,usage_steal=0,usage_system=2,usage_user=44 1564241795000000000
- Start
Since I do not have InfluxDB installed, the log below contains connection errors; InfluxDB itself can be run in a container (see the sketch after the log output).
telegraf --config telegraf.conf
Log:
2019-07-27T15:34:27Z I! Starting Telegraf 1.11.3
2019-07-27T15:34:27Z I! Loaded inputs: cpu mem
2019-07-27T15:34:27Z I! Loaded aggregators:
2019-07-27T15:34:27Z I! Loaded processors:
2019-07-27T15:34:27Z I! Loaded outputs: influxdb
2019-07-27T15:34:27Z I! Tags enabled: host=dalongrong.local
2019-07-27T15:34:27Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"dalongrong.local", Flush Interval:10s
2019-07-27T15:34:27Z W! [outputs.influxdb] when writing to [http://localhost:8086]: database "" creation failed: Post http://localhost:8086/query: dial tcp [::1]:8086: connect: connection refused
2019-07-27T15:34:40Z E! [outputs.influxdb] when writing to [http://localhost:8086]: Post http://localhost:8086/write?db=telegraf: dial tcp [::1]:8086: connect: connection refused
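For reference, the quickest way to get a local InfluxDB for Telegraf to write to is a container. A minimal sketch, assuming Docker is installed and using a 1.x image so the database/HTTP settings in the config above apply:
# start InfluxDB 1.x listening on the default HTTP port
docker run -d --name influxdb -p 8086:8086 influxdb:1.8
Once it is up, the connection-refused errors above should stop and Telegraf will create the telegraf database on its first write.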
Telegraf commands
- Help
Reading through the built-in commands and flags is a quick way to learn how the tool is used:
Telegraf, The plugin-driven server agent for collecting and reporting metrics.
Usage:
telegraf [commands|flags]
The commands & flags are:
config print out full sample configuration to stdout
version print the version to stdout
--aggregator-filter <filter> filter the aggregators to enable, separator is :
--config <file> configuration file to load
--config-directory <directory> directory containing additional *.conf files
--debug turn on debug logging
--input-filter <filter> filter the inputs to enable, separator is :
--input-list print available input plugins.
--output-filter <filter> filter the outputs to enable, separator is :
--output-list print available output plugins.
--pidfile <file> file to write our pid to
--pprof-addr <address> pprof address to listen on, don't activate pprof if empty
--processor-filter <filter> filter the processors to enable, separator is :
--quiet run in quiet mode
--section-filter filter config sections to output, separator is :
Valid values are 'agent', 'global_tags', 'outputs',
'processors', 'aggregators' and 'inputs'
--sample-config print out full sample configuration
--test gather metrics, print them out, and exit;
processors, aggregators, and outputs are not run
--usage <plugin> print usage for a plugin, ie, 'telegraf --usage mysql'
--version display the version and exit
Examples:
# generate a telegraf config file:
telegraf config > telegraf.conf
# generate config with only cpu input & influxdb output plugins defined
telegraf --input-filter cpu --output-filter influxdb config
# run a single telegraf collection, outputing metrics to stdout
telegraf --config telegraf.conf --test
# run telegraf with all plugins defined in config file
telegraf --config telegraf.conf
# run telegraf, enabling the cpu & memory input, and influxdb output plugins
telegraf --config telegraf.conf --input-filter cpu:mem --output-filter influxdb
# run telegraf with pprof
telegraf --config telegraf.conf --pprof-addr localhost:6060
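Beyond the examples above, the list and usage flags are handy for exploring which plugins are available and what options each one takes, for example:
# list all available input / output plugins
telegraf --input-list
telegraf --output-list
# print the sample configuration for a single plugin
telegraf --usage cpu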
Notes
That covers a simple installation and basic usage. Later posts will look at input, output, aggregator, and processor plugins in more detail.
References
https://github.com/influxdata/telegraf
https://docs.influxdata.com/telegraf/v1.11/