Have you ever wondered whether Logstash is actually sending data to your outputs? There's a brand new way to check if Logstash has a "pulse." Introducing the heartbeat input plugin! It's bundled with Logstash 1.5, so you can start using it immediately!

Why?

Logstash currently has a single pipeline. All events generated by inputs travel through the filter block, and then out of Logstash through the output block.

Even if you have multiple outputs and are separating events using conditionals, all events pass through this single pipeline. If any one of your outputs backs up, the entire pipeline stops flowing. The heartbeat plugin takes advantage of this to help you know when the flow of events slows or stops altogether.

How?

The heartbeat plugin sends a message at a definable interval. Here are the options available for the message configuration parameter:

  • Any string value: The message field will contain the specified string value. If unset, the message field will contain the string value "ok".
  • epoch: Rather than a message field, this will result in a clock field containing the current epoch timestamp (UTC), that is, the number of seconds elapsed since January 1, 1970.
  • sequence: Rather than a message field, this will result in a clock field containing a number. The sequence starts at zero when Logstash starts and increments each time the specified interval elapses, so restarting Logstash resets the counter to zero.

Examples

Be sure to assign a type to your heartbeat events. This will make it possible to conditionally act on these events later on.

"ok" Message

Perhaps you only want to know that Logstash is still sending messages. Your monitoring system can interpret an "ok" received within a time window as an indicator that everything is working. Your monitoring system would be responsible for tracking the time between "ok" messages.

I can send the default "ok" message every 10 seconds like this:

input {
  heartbeat {
    interval => 10
    type => "heartbeat"
  }
  # ... other input blocks go here
}

The events would look like this:

{"message":"ok","host":"example.com","@version":"1","@timestamp":"2015-03-18T17:05:24.696Z","type":"heartbeat"}
{"message":"ok","host":"example.com","@version":"1","@timestamp":"2015-03-18T17:05:34.696Z","type":"heartbeat"}
{"message":"ok","host":"example.com","@version":"1","@timestamp":"2015-03Read More

Epoch timestamp

Perhaps your monitoring system uses Unix timestamps to track event timing (Zabbix, for example). If so, you can use the epoch timestamp in the clock field to calculate the difference between "now" and when Logstash generated the heartbeat event, which gives you a measure of lag. This can be especially useful if you inject the heartbeat before events go into a broker or buffering system, like Redis, RabbitMQ, or Kafka. If the buffer begins to fill up, the time difference becomes immediately apparent. You can use this to track the elapsed time, from event creation to indexing, for your entire Logstash pipeline.

This example will send the epoch timestamp in the clock field:

input {
  heartbeat {
    message => "epoch"
    interval => 10
    type => "heartbeat"
  }
  # ... other input blocks go here
}

The events would look like this:

{"clock":1426698365,"host":"example.com","@version":"1","@timestamp":"2015-03-18T17:06:05.360Z","type":"heartbeat"}
{"clock":1426698375,"host":"example.com","@version":"1","@timestamp":"2015-03-18T17:06:15.364Z","type":"heartbeat"}
{"clock":1426698385,"host":"example.com","@version":"1","@timestamp":"2015Read More

Sequence of numbers

This example makes it easy to check at a glance whether new events are still arriving, because the clock value increases continuously.

input {
  heartbeat {
    message => "sequence"
    interval => 10
    type => "heartbeat"
  }
  # ... other input blocks go here
}

The events would look like this:

{"clock":1,"host":"example.com","@version":"1","@timestamp":"2015-03-18T17:08:13.024Z","type":"heartbeat"}
{"clock":2,"host":"example.com","@version":"1","@timestamp":"2015-03-18T17:08:23.027Z","type":"heartbeat"}
{"clock":3,"host":"example.com","@version":"1","@timestamp":"2015-03-18T17Read More

Output

Now let's add a conditional so that heartbeat events go to our monitoring system, and not to our other outputs:

output {
  if [type] == "heartbeat" {
    # Define the output block for your monitoring system here
  } else {
    # ... other output blocks go here
  }
}

Of course, if you do want your heartbeat messages to be indexed alongside your log data, you are free to do so.
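
For example, removing the else means every event, heartbeats included, also reaches your main output, while heartbeats still go to the monitoring output. A minimal sketch, assuming the elasticsearch output and its Logstash 1.5-era host option (newer releases use a hosts array):

output {
  if [type] == "heartbeat" {
    # Define the output block for your monitoring system here
  }
  # no "else", so every event, including heartbeats, is also indexed
  elasticsearch {
    host => "localhost"   # assumed address
  }
}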

Conclusion

The new heartbeat plugin provides a simple but effective way to monitor the availability of your Logstash instances right now. We have big plans for the future, though. Take a look at our road map!

In the future we plan to have a full API, complete with visibility into the pipeline, plugin performance, queue status, event throughput, and so much more. We are super excited to bring these improvements to you!

Happy Logstashing!

For reference, here is one more complete configuration that tags heartbeat events, sends the epoch timestamp in the clock field, and writes the heartbeats to their own log file:

input {
  heartbeat {
    tags => ["heartbeat"]
    type => "heartbeat"
    message => "epoch"
    interval => 10   # the original omitted a value; 10 seconds is an assumption
  }
}

output {
  if "heartbeat" in [tags] {
    file {
      path => "/var/log/cloudchef/logstash/logstash-heartbeat.log"
    }
  }
}
