Have you ever wondered if Logstash was sending data to your outputs? There's a brand new way to check if Logstash has a "pulse." Introducing the heartbeat input plugin! It’s bundled with Logstash 1.5 so you can start using it immediately!

Why?

Logstash currently has a single pipeline. All events generated by inputs travel through the filter block, and then out of Logstash through the output block.

Even if you have multiple outputs and are separating events using conditionals, all events pass through this single pipeline. If any one of your outputs backs up, the entire pipeline stops flowing. The heartbeat plugin takes advantage of this to help you know when the flow of events slows, or stops altogether.

How?

The heartbeat plugin sends a message at a definable interval. Here are the options available for the message configuration parameter:

  • Any string value: The message field will contain the specified string value. If unset, the message field will contain the string value "ok".
  • epoch: Rather than a message field, this results in a clock field containing the current epoch timestamp (UTC), that is, the number of seconds elapsed since Jan 1, 1970.
  • sequence: Rather than a message field, this results in a clock field containing a number. The counter starts at zero and increments each time the specified interval elapses. Note that this means that if you restart Logstash, the counter resets to zero.

Examples

Be sure to assign a type to your heartbeat events. This will make it possible to conditionally act on these events later on.

"ok" Message

Perhaps you only want to know that Logstash is still sending messages. Your monitoring system can interpret an "ok" received within a time window as an indicator that everything is working. Your monitoring system would be responsible for tracking the time between "ok" messages.

I can send the default "ok" message every 10 seconds like this:

input {
  heartbeat {
    interval => 10
    type => "heartbeat"
  }
  # ... other input blocks go here
}

The events would look like this:

{"message":"ok","host":"example.com","@version":"1","@timestamp":"2015-03-18T17:05:24.696Z","type":"heartbeat"}
{"message":"ok","host":"example.com","@version":"1","@timestamp":"2015-03-18T17:05:34.696Z","type":"heartbeat"}
{"message":"ok","host":"example.com","@version":"1","@timestamp":"2015-03Read More

Epoch timestamp

Perhaps your monitoring system uses Unix timestamps to track event timing (Zabbix, for example). If so, you can use the epoch timestamp in the clock field to calculate the difference between "now" and when Logstash generated the heartbeat event, which gives you a measure of lag. This may be especially useful if you inject the heartbeat before events go into a broker or buffering system, such as Redis, RabbitMQ, or Kafka. If the buffer begins to fill up, the time difference becomes immediately apparent. You could use this to track the elapsed time, from event creation to indexing, across your entire Logstash pipeline.

This example will send the epoch timestamp in the clock field:

input {
  heartbeat {
    message => "epoch"
    interval => 10
    type => "heartbeat"
  }
  # ... other input blocks go here
}

The events would look like this:

{"clock":1426698365,"host":"example.com","@version":"1","@timestamp":"2015-03-18T17:06:05.360Z","type":"heartbeat"}
{"clock":1426698375,"host":"example.com","@version":"1","@timestamp":"2015-03-18T17:06:15.364Z","type":"heartbeat"}
{"clock":1426698385,"host":"example.com","@version":"1","@timestamp":"2015Read More

Sequence of numbers

This example makes it easy to check at a glance whether new events are arriving, because the clock value increases continuously.

input {
  heartbeat {
    message => "sequence"
    interval => 10
    type => "heartbeat"
  }
  # ... other input blocks go here
}

The events would look like this:

{"clock":1,"host":"example.com","@version":"1","@timestamp":"2015-03-18T17:08:13.024Z","type":"heartbeat"}
{"clock":2,"host":"example.com","@version":"1","@timestamp":"2015-03-18T17:08:23.027Z","type":"heartbeat"}
{"clock":3,"host":"example.com","@version":"1","@timestamp":"2015-03-18T17Read More

Output

Now let's add a conditional to send this to our monitoring system, and not to our other outputs:

output {
  if [type] == "heartbeat" {
    # Define the output block for your monitoring system here
  } else {
    # ... other output blocks go here
  }
}

Of course, if you do want your heartbeat messages to be indexed alongside your log data, you are free to do so.

Conclusion

The new heartbeat plugin provides a simple but effective way to monitor the availability of your Logstash instances right now. We have big plans for the future, though. Take a look at our road map!

In the future we plan to have a full API, complete with visibility into the pipeline, plugin performance, queue status, event throughput and so much more.  We are super excited to bring these improvements to you!

Happy Logstashing!

