Install pip if necessary

curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py"
python get-pip.py
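
A quick sanity check that Python and pip are on the path (version output will vary with your environment):

python --version
pip --version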

Install Curator for Elasticsearch

Elasticsearch Curator helps you curate, or manage, your Elasticsearch indices and snapshots by:

  • Obtaining the full list of indices (or snapshots) from the cluster, as the actionable list
  • Iterating through a list of user-defined filters to progressively remove indices (or snapshots) from this actionable list as needed
  • Performing various actions on the items which remain in the actionable list

Install Curator, and pin Click to a version compatible with this Curator release:

pip install elasticsearch-curator
pip install click==6.7
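
To confirm the installation, Curator's command-line tools can print their version (a quick check; the exact output depends on the release you installed):

curator --version
curator_cli --version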

Configure Curator

mkdir -p /var/log/elastic
touch /var/log/elastic/curator.log
mkdir ~/.curator
vi ~/.curator/curator.yml
curator.yml
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
client:
  hosts: [Elasticsearch Server IP]
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth:
  timeout: 30
  master_only: False

logging:
  loglevel: INFO
  logfile: /var/log/elastic/curator.log
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']

Run a quick test; you should now be able to list the indices in the cluster
curator_cli show_indices

Create repository

Configure elasticsearch.yml, located by default at /etc/elasticsearch/elasticsearch.yml

elasticsearch.yml
path.repo:  /u01/elasticsearch/backup
http.max_header_size: 16kb

Restart the Elasticsearch service (service elasticsearch restart) to apply the configuration.
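
Once the node is back up, two quick checks can confirm the change took effect: cluster health should return to yellow or green, and the node settings should include the path.repo value (both are standard Elasticsearch APIs):

curl -XGET 'localhost:9200/_cluster/health?pretty'
curl -XGET 'localhost:9200/_nodes/settings?pretty'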

Create the snapshot repository es_backup. Ensure location points to a valid path which is configured in path.repo and is accessible from all nodes.

curl -XPUT http://localhost:9200/_snapshot/es_backup -H "Content-Type: application/json" -d @repository.json
repository.json
{
   "type": "fs",
   "settings": {
      "compress": true,
      "location": "/u01/elasticsearch/backup"
   }
}

Check that the repository was created

curl -XGET 'localhost:9200/_snapshot/_all?pretty=true'
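
You can also ask Elasticsearch to verify that the repository is registered and writable on all nodes, using the standard repository verification endpoint:

curl -XPOST 'localhost:9200/_snapshot/es_backup/_verify?pretty'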

Create Curator YAML action files

daily_backup.yml

Customize the snapshot name in the name option.
action 1: back up all indices from before today to repository es_backup with the specified snapshot name
action 2: delete indices older than 185 days

daily_backup.yml
---
actions:
  1:
    action: snapshot
    description: >-
      Snapshot all selected indices to repository 'es_backup' with the specified snapshot name
    options:
      repository: es_backup
      name: '<c4cert-{now/d-1d}>'
      wait_for_completion: True
      max_wait: 4800
      wait_interval: 30
    filters:
    - filtertype: age
      source: name
      direction: older
      unit: days
      unit_count: 1
      timestring: "%Y.%m.%d"
 
 
  2:
    action: delete_indices
    description: >-
      Delete indices which are older than 185 days
    filters:
    - filtertype: age
      source: name
      direction: older
      unit: days
      unit_count: 185
      timestring: "%Y.%m.%d"
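
For reference, a single snapshot can also be taken manually with the snapshot API, which is roughly what the snapshot action does under the hood. The snapshot name manual-test below is only an illustration:

curl -XPUT 'localhost:9200/_snapshot/es_backup/manual-test?wait_for_completion=true&pretty' -H "Content-Type: application/json" -d '{"indices": "*", "ignore_unavailable": true}'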

del_snapshot.yml
action 1: Delete snapshots older than 185 days from repository es_backup

del_snapshot.yml
---
 
actions:
  1:
    action: delete_snapshots
    description: >-
      Delete snapshots which are older than 185 days from the repository
    options:
      repository: es_backup
      retry_interval: 120
      retry_count: 3
    filters:
    - filtertype: age
      source: creation_date
      direction: older
      unit: days
      unit_count: 185
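
Before and after running this action you can list the snapshots in the repository to see what will be (or was) removed:

curl -XGET 'localhost:9200/_snapshot/es_backup/_all?pretty'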

restore.yml
action 1: Restore all indices in the most recent snapshot with state SUCCESS.

restore.yml
---
 
actions:
  1:
    action: restore
    description: >-
      Restore all indices in the most recent snapshot with state SUCCESS.  Wait
      for the restore to complete before continuing.  Do not skip the repository
      filesystem access check.  Use the other options to define the index/shard
      settings for the restore.
    options:
      repository: es_backup
      # If name is blank, the most recent snapshot by age will be selected
      name:
      # If indices is blank, all indices in the snapshot will be restored
      indices:
      wait_for_completion: True
      max_wait: 3600
      wait_interval: 10
    filters:
    - filtertype: state
      state: SUCCESS
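
While a restore is running, shard recovery can be monitored with the cat recovery API (a standard Elasticsearch endpoint):

curl -XGET 'localhost:9200/_cat/recovery?v'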

Note: use the --dry-run option to verify your actions without making any changes. The dry-run results are written to the configured log file.
curator --dry-run daily_backup.yml
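
Since logging is directed to the file configured in curator.yml, the dry-run results can be followed there:

tail -f /var/log/elastic/curator.log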

Shell script and crontab

run.sh
#!/bin/sh
# Delete old snapshots first, then run the daily backup actions
curator /u01/curator/del_snapshot.yml
curator /u01/curator/daily_backup.yml

crontab -e

Here the job is configured to run every day at 3 AM

crontab
0 3 * * * /bin/sh /u01/curator/run.sh
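
If you also want cron's own record of each run, stdout and stderr can be redirected to a file; the log path below (curator_cron.log) is just an example:

0 3 * * * /bin/sh /u01/curator/run.sh >> /var/log/elastic/curator_cron.log 2>&1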

Restore

curator restore.yml

Tested OK in the CERT environment.

Some useful API calls

# get all repositories
curl -XGET 'localhost:9200/_snapshot/_all?pretty=true'
 
# delete repository
curl -XDELETE 'localhost:9200/_snapshot/es-snapshot?pretty=true'
 
# show snapshots
curator_cli show_snapshots --repository es_backup
 
# show indices
curator_cli show_indices
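
One more that can be handy: the snapshot status endpoint reports snapshots currently running in the repository (a standard Elasticsearch API):

# show in-progress snapshots
curl -XGET 'localhost:9200/_snapshot/es_backup/_status?pretty'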
