[Original] Big Data Basics: Logstash (4) High Availability
Logstash high availability means no data loss, assuming the server is only briefly unavailable and can be recovered (e.g. by restarting the server or the process). This covers two scenarios:
- Process restart (or server restart)
- Failure to process an event
The corresponding Logstash features are:
- Persistent Queues
- Dead Letter Queues
Neither is enabled by default.
In addition, automatic process restart can be implemented with Docker, Marathon, or systemd.
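As a sketch, automatic restart under systemd could look like the unit file below; the paths and user are assumptions and depend on how Logstash was installed:

```ini
# /etc/systemd/system/logstash.service -- paths and user are hypothetical
[Unit]
Description=Logstash
After=network.target

[Service]
User=logstash
ExecStart=/usr/share/logstash/bin/logstash --path.settings /etc/logstash
# Restart the process automatically whenever it exits abnormally
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Packaged installations (deb/rpm) already ship a similar unit, so this is mainly relevant for manual installs.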
As data flows through the event processing pipeline, Logstash may encounter situations that prevent it from delivering events to the configured output. For example, the data might contain unexpected data types, or Logstash might terminate abnormally.
To guard against data loss and ensure that events flow through the pipeline without interruption, Logstash provides the following data resiliency features.
- Persistent Queues protect against data loss by storing events in an internal queue on disk.
- Dead Letter Queues provide on-disk storage for events that Logstash is unable to process. You can easily reprocess events in the dead letter queue by using the dead_letter_queue input plugin.
These resiliency features are disabled by default.
1 Persistent Queues
By default, Logstash uses in-memory bounded queues between pipeline stages (inputs → pipeline workers) to buffer events. The size of these in-memory queues is fixed and not configurable. If Logstash experiences a temporary machine failure, the contents of the in-memory queue will be lost. Temporary machine failures are scenarios where Logstash or its host machine are terminated abnormally but are capable of being restarted.
In order to protect against data loss during abnormal termination, Logstash has a persistent queue feature which will store the message queue on disk. Persistent queues provide durability of data within Logstash.
In short: by default Logstash buffers events in an in-memory queue, so all data in that queue is lost when the process restarts.
Benefits
- Absorbs bursts of events without needing an external buffering mechanism like Redis or Apache Kafka.
- Provides an at-least-once delivery guarantee against message loss during a normal shutdown as well as when Logstash is terminated abnormally.
Implementation
The queue sits between the input and filter stages in the same process:
input → queue → filter + output
When an input has events ready to process, it writes them to the queue. When the write to the queue is successful, the input can send an acknowledgement to its data source.
When processing events from the queue, Logstash acknowledges events as completed, within the queue, only after filters and outputs have completed. The queue keeps a record of events that have been processed by the pipeline. An event is recorded as processed (in this document, called "acknowledged" or "ACKed") if, and only if, the event has been processed completely by the Logstash pipeline.
Configuration
queue.type: persisted
path.queue: "path/to/data/persistent_queue"
Other settings
queue.page_capacity
queue.drain
queue.max_events
queue.max_bytes
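Putting the settings above together, a minimal logstash.yml sketch might look like this; the path and sizes are illustrative values, not recommendations:

```yaml
# logstash.yml -- persistent queue settings (values are examples)
queue.type: persisted                  # default is "memory"
path.queue: /var/lib/logstash/queue    # hypothetical path
queue.page_capacity: 64mb              # size of each on-disk page (default 64mb)
queue.max_events: 0                    # 0 = unlimited number of queued events
queue.max_bytes: 1024mb                # total on-disk capacity of the queue
queue.drain: true                      # on shutdown, block until the queue is empty
```

When the queue reaches queue.max_bytes, Logstash applies back pressure to inputs instead of accepting more events.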
Going deeper
First, the queue itself is a set of pages. There are two kinds of pages: head pages and tail pages. The head page is where new events are written. There is only one head page. When the head page is of a certain size (see queue.page_capacity), it becomes a tail page, and a new head page is created. Tail pages are immutable, and the head page is append-only. Second, the queue records details about itself (pages, acknowledgements, etc) in a separate file called a checkpoint file.
When recording a checkpoint, Logstash will:
- Call fsync on the head page.
- Atomically write to disk the current state of the queue.
The process of checkpointing is atomic, which means any update to the file is saved if successful.
If Logstash is terminated, or if there is a hardware-level failure, any data that is buffered in the persistent queue, but not yet checkpointed, is lost.
You can force Logstash to checkpoint more frequently by setting queue.checkpoint.writes. This setting specifies the maximum number of events that may be written to disk before forcing a checkpoint. The default is 1024. To ensure maximum durability and avoid losing data in the persistent queue, you can set queue.checkpoint.writes: 1 to force a checkpoint after each event is written. Keep in mind that disk writes have a resource cost. Setting this value to 1 can severely impact performance.
Even with the persistent queue enabled, data loss is still possible; the deciding factor is the checkpoint interval. By default a checkpoint is forced every 1024 events; setting it to 1 checkpoints after every event, which avoids losing messages but has a significant performance cost:
queue.checkpoint.writes: 1
2 Dead Letter Queues
By default, when Logstash encounters an event that it cannot process because the data contains a mapping error or some other issue, the Logstash pipeline either hangs or drops the unsuccessful event. In order to protect against data loss in this situation, you can configure Logstash to write unsuccessful events to a dead letter queue instead of dropping them.
Each event written to the dead letter queue includes the original event, along with metadata that describes the reason the event could not be processed, information about the plugin that wrote the event, and the timestamp for when the event entered the dead letter queue.
To process events in the dead letter queue, you simply create a Logstash pipeline configuration that uses the dead_letter_queue input plugin to read from the queue.
In short: when Logstash encounters data it cannot process (a mapping error, etc.), it either hangs or drops the failed event; to avoid losing data in this situation, configure Logstash to write failed events to a dead letter queue instead of dropping them.
Limitations
The dead letter queue feature is currently supported for the elasticsearch output only. Additionally, the dead letter queue is only used where the response code is either 400 or 404, both of which indicate an event that cannot be retried. Support for additional outputs will be available in future releases of the Logstash plugins. Before configuring Logstash to use this feature, refer to the output plugin documentation to verify that the plugin supports the dead letter queue feature.
Currently the dead letter queue is supported only by the elasticsearch output; support for other outputs will come in future releases.
Configuration
dead_letter_queue.enable: true
path.dead_letter_queue: "path/to/data/dead_letter_queue"
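To reprocess dead-lettered events, a separate pipeline can read from the queue with the dead_letter_queue input plugin. A sketch (the path must match path.dead_letter_queue; the pipeline_id value and the added field name are assumptions):

```conf
# Hypothetical pipeline that re-reads dead-lettered events
input {
  dead_letter_queue {
    path => "path/to/data/dead_letter_queue"   # must match path.dead_letter_queue
    pipeline_id => "main"                      # read events written by the "main" pipeline
    commit_offsets => true                     # remember the read position across restarts
  }
}
filter {
  # the failure reason is recorded in the event's @metadata
  mutate {
    add_field => { "dlq_reason" => "%{[@metadata][dead_letter_queue][reason]}" }
  }
}
output {
  stdout { codec => rubydebug { metadata => true } }
}
```

After fixing the underlying issue (e.g. the mapping), the same pipeline could write the events back to elasticsearch instead of stdout.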
References:
https://www.elastic.co/guide/en/logstash/current/resiliency.html
https://www.elastic.co/guide/en/logstash/current/persistent-queues.html
https://www.elastic.co/guide/en/logstash/current/dead-letter-queues.html