Elasticsearch: managing time-based indices with rollover (merging and deleting old indices)
https://www.elastic.co/cn/blog/managing-time-based-indices-efficiently
Anybody who uses Elasticsearch for indexing time-based data such as log events is accustomed to the index-per-day pattern: use an index name derived from the timestamp of the logging event rounded to the nearest day, and new indices pop into existence as soon as they are required. The definition of the new index can be controlled ahead of time using index templates.
This is an easy pattern to understand and implement, but it glosses over some of the complexities of index management such as the following:
- To achieve high ingest rates, you want to spread the shards from your active index over as many nodes as possible.
- For optimal search and low resource usage you want as few shards as possible, but not shards that are so big that they become unwieldy.
- An index per day makes it easy to expire old data, but how many shards do you need for one day?
- Is every day the same, or do you end up with too many shards one day and not enough the next?
In this blog post I’m going to introduce the new Rollover Pattern, along with the APIs which support it: a simpler, more efficient way of managing time-based indices.
Rollover Pattern
The Rollover Pattern works as follows:
- There is one alias which is used for indexing and which points to the active index.
- Another alias points to active and inactive indices, and is used for searching.
- The active index can have as many shards as you have hot nodes, to take advantage of the indexing resources of all your expensive hardware.
- When the active index is too full or too old, it is rolled over: a new index is created and the indexing alias switches atomically from the old index to the new.
- The old index is moved to a cold node and is shrunk down to one shard, which can also be force-merged and compressed.
Getting started
Let’s assume we have a cluster with 10 hot nodes and a pool of cold nodes. Ideally, our active index (the one receiving all the writes) should have one shard on each hot node in order to split the indexing load over as many machines as possible.
We want to have one replica of each primary shard to ensure that we can tolerate the loss of a node without losing any data. This means that our active index should have 5 primary shards, giving us a total of 10 shards (or one per hot node). We could also use 10 primary shards (a total of 20 including replicas) and have two shards on each node.
First, we’ll create an index template for our active index:
PUT _template/active-logs
{
  "template": "active-logs-*",
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1,
    "routing.allocation.include.box_type": "hot",
    "routing.allocation.total_shards_per_node": 2
  },
  "aliases": {
    "search-logs": {}
  }
}
Indices created from this template will be allocated to nodes tagged with box_type: hot, and the total_shards_per_node setting will help to ensure that the shards are spread across as many hot nodes as possible. I’ve set it to 2 instead of 1 so that we can still allocate shards if a node fails.
Here is the template which will be used by our inactive indices:
PUT _template/inactive-logs
{
  "template": "inactive-logs-*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0,
    "routing.allocation.include.box_type": "cold",
    "codec": "best_compression"
  }
}
Archived indices should be allocated to cold nodes and should use deflate compression to save space. I’ll explain why I’ve set replicas to 0 later.
Now we can create our first active index:
PUT active-logs-1
PUT active-logs-1/_alias/active-logs
The -1 pattern in the name is recognised as a counter by the rollover API. We will use the active-logs alias to index into the current active index, and the search-logs alias to search across all log indices.
Indexing events
When we created the active-logs-1 index, we also created the active-logs alias. From this point on, we should index using only the alias, and our documents will be sent to the current active index:
POST active-logs/log/_bulk
{ "create": {}}
{ "text": "Some log message", "@timestamp": "2016-07-01T01:00:00Z" }
{ "create": {}}
{ "text": "Some log message", "@timestamp": "2016-07-02T01:00:00Z" }
{ "create": {}}
{ "text": "Some log message", "@timestamp": "2016-07-03T01:00:00Z" }
{ "create": {}}
{ "text": "Some log message", "@timestamp": "2016-07-04T01:00:00Z" }
{ "create": {}}
{ "text": "Some log message", "@timestamp": "2016-07-05T01:00:00Z" }
Rolling over the index
At some stage, the active index is going to become too big or too old and you will want to replace it with a new empty index. The rollover API allows you to specify just how big or how old an index is allowed to be before it is rolled over.
How big is too big? As always, it depends. It depends on the hardware you have, the types of searches you perform, the performance you expect to see, how long you are willing to wait for a shard to recover, etc etc. In practice, you can try out different shard sizes and see what works for you. To start, choose some arbitrary number like 100 million or 1 billion. You can adjust this number up or down depending on search performance, data retention periods, and available space.
There is a hard limit on the number of documents that a single shard can contain: 2,147,483,519. If you plan to shrink your active index down to a single shard, you must have fewer than 2.1 billion documents in your active index. If you have more documents than can fit into a single shard, you can shrink your index to more than one shard, as long as the target number of shards is a factor of the original, e.g. 6 → 3, or 6 → 2.
Rolling an index over by age may be convenient as it allows you to parcel up your logs by hour, day, week, etc, but it is usually more efficient to base the rollover decision on the number of documents in the index. One of the benefits of size-based rollover is that all of your shards are of approximately the same weight, which makes them easier to balance.
The rollover API would be called regularly by a cron job to check whether the max_docs or max_age constraints have been breached. As soon as at least one constraint has been breached, the index is rolled over. Since we’ve only indexed 5 documents in the example above, we’ll specify a max_docs value of 5, and (for completeness) a max_age of one week:
POST active-logs/_rollover
{
  "conditions": {
    "max_age": "7d",
    "max_docs": 5
  }
}
This request tells Elasticsearch to roll over the index pointed to by the active-logs alias if that index either was created at least seven days ago or contains at least 5 documents. The response looks like this:
{
  "old_index": "active-logs-1",
  "new_index": "active-logs-2",
  "rolled_over": true,
  "dry_run": false,
  "conditions": {
    "[max_docs: 5]": true,
    "[max_age: 7d]": false
  }
}
The active-logs-1 index has been rolled over to the active-logs-2 index because the max_docs: 5 constraint was met. This means that a new index called active-logs-2 has been created, based on the active-logs template, and the active-logs alias has been switched from active-logs-1 to active-logs-2.
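To verify the switch, you could ask Elasticsearch which index the alias now points to; a minimal sketch:
GET _alias/active-logs
If the rollover succeeded, the response should mention only active-logs-2.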
By the way, if you want to override any values from the index template such as settings or mappings, you can just pass them in to the _rollover request body like you would with the create index API.
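For example, here is a hedged sketch of a rollover request that also overrides a setting for the new index (the shard count shown is purely illustrative, not a recommendation):
POST active-logs/_rollover
{
  "conditions": {
    "max_docs": 5
  },
  "settings": {
    "index.number_of_shards": 10
  }
}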
Why don't we support a max_size constraint?
Given that the intention is to produce evenly sized shards, why don't we support a max_size constraint in addition to max_docs? The answer is that shard size is a less reliable measure because ongoing merges can produce significant temporary growth in shard size, which disappears as soon as a merge is completed. Five primary shards, each in the process of merging to one 5GB shard, would temporarily increase the index size by 25GB! The doc count, on the other hand, grows predictably.
Shrinking the index
Now that active-logs-1 is no longer being used for writes, we can move it off to the cold nodes and shrink it down to a single shard in a new index called inactive-logs-1. Before shrinking, we have to:
- Make the index read-only.
- Move one copy of all shards to a single node. You can choose whichever node you like, probably whichever cold node has the most available space.
This can be achieved with the following request:
PUT active-logs-1/_settings
{
  "index.blocks.write": true,
  "index.routing.allocation.require._name": "some_node_name"
}
The allocation setting ensures that at least one copy of each shard will be moved to the node named some_node_name. It won’t move ALL the shards (a replica shard can’t be allocated to the same node as its primary), but it will ensure that at least one primary or replica of each shard will move.
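One way to wait for the relocation to finish is the cluster health API; a hedged sketch (the wait_for_no_relocating_shards parameter is the 5.x name, so adjust it to your version, and pick a timeout that suits your cluster):
GET _cluster/health/active-logs-1?wait_for_no_relocating_shards=true&timeout=5m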
Once the index has finished relocating, issue the following request to shrink the index:
POST active-logs-1/_shrink/inactive-logs-1
This will kick off the shrink process. As long as your filesystem supports hard links, the shrink will be almost instantaneous. If your filesystem doesn’t support hard links, well, you’ll have to wait while all the segment files are copied from one index to another…
You can monitor the shrink process with the cat recovery API or with the cluster health API:
GET _cluster/health/inactive-logs-1?wait_for_status=yellow
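The equivalent cat recovery call might look like this (a sketch, with the verbose header flag):
GET _cat/recovery/inactive-logs-1?v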
As soon as it is done, you can remove the search-logs alias from the old index and add it to the new:
POST _aliases
{
  "actions": [
    {
      "remove": {
        "index": "active-logs-1",
        "alias": "search-logs"
      }
    },
    {
      "add": {
        "index": "inactive-logs-1",
        "alias": "search-logs"
      }
    }
  ]
}
Saving space
Our index has been reduced to a single shard, but it still contains the same number of segment files as before, and the best_compression setting hasn’t kicked in yet because we haven’t made any writes. We can improve the situation by force-merging our single-shard index down to a single segment, as follows:
POST inactive-logs-1/_forcemerge?max_num_segments=1
This request will create a single new segment to replace the multiple segments that existed before. Also, because Elasticsearch has to write a new segment, the best_compression setting will kick in and the segment will be written with deflate compression.
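To confirm the result, you could list the segments of the shrunken index; a minimal sketch using the cat segments API (a single row per shard is what you would expect once the merge completes):
GET _cat/segments/inactive-logs-1?v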
There is no point in running a force-merge on both the primary and replica shard, which is why our template for inactive log indices sets the number_of_replicas to 0. Now that the force-merge has finished, we can increase the number of replicas to gain redundancy:
PUT inactive-logs-1/_settings
{ "number_of_replicas": 1 }
Once the replica has been allocated (use the cluster health API with ?wait_for_status=green to check, as shown above), we can be sure that we have a redundant copy of our data and we can safely delete the active-logs-1 index:
DELETE active-logs-1
Deleting old indices
With the old index-per-day pattern, it was easy to decide which old indices could be dropped. With the rollover pattern, it isn’t quite as obvious which index contains data from which time period.
Fortunately, the field stats API makes this easy to determine. We just need to look for all indices where the highest value in the @timestamp field is older than our cutoff:
GET search-logs/_field_stats?level=indices
{
  "fields": ["@timestamp"],
  "index_constraints": {
    "@timestamp": {
      "max_value": {
        "lt": "2016/07/03",
        "format": "yyyy/MM/dd"
      }
    }
  }
}
Any indices returned by the above request can be deleted.
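Assuming the field stats response listed inactive-logs-1 (a hypothetical result for this example), the cleanup is a plain delete:
DELETE inactive-logs-1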
Future enhancements
With the rollover, shrink, force-merge, and field-stats APIs, we have provided you with the primitives to manage time-based indices efficiently.
Still, there are a number of steps which could be automated to make life simpler. These steps are not easy to automate within Elasticsearch because we need to be able to inform somebody when things don’t go as planned. This is the role of a utility or application built on top of Elasticsearch.
Expect to see a workflow based on the above provided by the Curator index management tool, and a nice UI in X-Pack which can take advantage of the scheduling and notification features to provide a simple, reliable index management tool.