ELK - Elasticsearch Basic Usage
I. Basic Concepts
1 Node and Cluster
Elasticsearch is essentially a distributed database: multiple servers can work together, and each server can run multiple Elasticsearch instances. A single Elasticsearch instance is called a node; a group of nodes forms a cluster.
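To see the nodes that currently form a cluster, the _cat/nodes API can be used. A minimal example, assuming the same localhost:9250 endpoint used in the operations below (the default HTTP port is 9200):
curl -X GET 'http://localhost:9250/_cat/nodes?v'
Each row of the output describes one node, including its roles and whether it is the elected master.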
2 Index
Elasticsearch indexes every field and, after processing, writes the result into an inverted index. When searching, it looks data up in this index directly.
So the top-level unit of data management in Elasticsearch is called an Index. It is roughly equivalent to a single database. Each Index (i.e., database) name must be lowercase.
3 Document
A single record inside an Index is called a Document; many Documents together make up an Index.
A Document is represented in JSON. Documents in the same Index are not required to share the same structure (schema), but keeping the structure consistent is recommended because it improves search efficiency.
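For illustration, a Document in a weather Index might look like the following (the field names and values are made up):
{
  "city": "beijing",
  "weather": "sunny",
  "temperature": 25
}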
4 Type
Documents can be grouped. For example, inside a weather Index they could be grouped by city (Beijing and Shanghai) or by weather (sunny and rainy). Such a grouping is called a Type; it is a virtual, logical grouping used to filter Documents.
Different Types should have similar structures (schemas). For example, the id field cannot be a string in one group and a number in another; this is one difference from tables in a relational database. Data of completely different natures (such as products and logs) should be stored in two separate Indices rather than as two Types in a single Index (even though that is technically possible).
According to the roadmap, Elasticsearch 6.x allows only one Type per Index, and Types will be removed entirely in 7.x.
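In Elasticsearch 5.x, the Type is the second segment of a document's URL: /index/type/id. A hypothetical example of indexing a Document into a weather Index under a beijing Type (the index name, type name, and fields are made up for illustration):
curl -X PUT 'http://localhost:9250/weather/beijing/1' -H 'Content-Type: application/json' -d '
{
  "city": "beijing",
  "weather": "sunny"
}'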
II. Operations
1: Check the Elasticsearch version
[elk@es logs]$ curl -X GET 'http://localhost:9250'
{
"name" : "elk01",
"cluster_name" : "elk-cluster",
"cluster_uuid" : "KW6Nr_pTSVuwT0gR0agtOA",
"version" : {
"number" : "5.3.1",
"build_hash" : "5f9cf58",
"build_date" : "2017-04-17T15:52:53.846Z",
"build_snapshot" : false,
"lucene_version" : "6.4.2"
},
"tagline" : "You Know, for Search"
}
[elk@es logs]$
Elasticsearch returns a JSON response containing the version, the current node, the cluster, and other information.
By default, Elasticsearch only accepts connections from the local machine. To allow remote access, edit the config/elasticsearch.yml file under the Elasticsearch installation directory, uncomment network.host, set its value to 0.0.0.0, and restart Elasticsearch.
Setting it to 0.0.0.0 lets anyone connect. Do not do this on a production service; set it to a specific IP instead.
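The relevant lines in config/elasticsearch.yml would look roughly like this (the port 9250 matches the instance used in this article; the default is 9200):
network.host: 0.0.0.0    # listen on all interfaces; use a concrete IP in production
http.port: 9250          # HTTP port, defaults to 9200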
2: List indices
[elk@es logs]$ curl -X GET 'http://localhost:9250/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open textindex r8Mj0h23TBO6uP6tBLGylQ 650b 650b
[elk@es logs]$
The result here is an index I just created (the creation steps are described below). The meaning of each column can be looked up with the help parameter:
[elk@es logs]$ curl -X GET 'http://localhost:9250/_cat/indices?help'
health | h | current health status
status | s | open/close status
index | i,idx | index name
uuid | id,uuid | index uuid
pri | p,shards.primary,shardsPrimary | number of primary shards
rep | r,shards.replica,shardsReplica | number of replica shards
docs.count | dc,docsCount | available docs
docs.deleted | dd,docsDeleted | deleted docs
creation.date | cd | index creation date (millisecond value)
creation.date.string | cds | index creation date (as string)
store.size | ss,storeSize | store size of primaries & replicas
pri.store.size | | store size of primaries
completion.size | cs,completionSize | size of completion
pri.completion.size | | size of completion
fielddata.memory_size | fm,fielddataMemory | used fielddata cache
pri.fielddata.memory_size | | used fielddata cache
fielddata.evictions | fe,fielddataEvictions | fielddata evictions
pri.fielddata.evictions | | fielddata evictions
query_cache.memory_size | qcm,queryCacheMemory | used query cache
pri.query_cache.memory_size | | used query cache
query_cache.evictions | qce,queryCacheEvictions | query cache evictions
pri.query_cache.evictions | | query cache evictions
request_cache.memory_size | rcm,requestCacheMemory | used request cache
pri.request_cache.memory_size | | used request cache
request_cache.evictions | rce,requestCacheEvictions | request cache evictions
pri.request_cache.evictions | | request cache evictions
request_cache.hit_count | rchc,requestCacheHitCount | request cache hit count
pri.request_cache.hit_count | | request cache hit count
request_cache.miss_count | rcmc,requestCacheMissCount | request cache miss count
pri.request_cache.miss_count | | request cache miss count
flush.total | ft,flushTotal | number of flushes
pri.flush.total | | number of flushes
flush.total_time | ftt,flushTotalTime | time spent in flush
pri.flush.total_time | | time spent in flush
get.current | gc,getCurrent | number of current get ops
pri.get.current | | number of current get ops
get.time | gti,getTime | time spent in get
pri.get.time | | time spent in get
get.total | gto,getTotal | number of get ops
pri.get.total | | number of get ops
get.exists_time | geti,getExistsTime | time spent in successful gets
pri.get.exists_time | | time spent in successful gets
get.exists_total | geto,getExistsTotal | number of successful gets
pri.get.exists_total | | number of successful gets
get.missing_time | gmti,getMissingTime | time spent in failed gets
pri.get.missing_time | | time spent in failed gets
get.missing_total | gmto,getMissingTotal | number of failed gets
pri.get.missing_total | | number of failed gets
indexing.delete_current | idc,indexingDeleteCurrent | number of current deletions
pri.indexing.delete_current | | number of current deletions
indexing.delete_time | idti,indexingDeleteTime | time spent in deletions
pri.indexing.delete_time | | time spent in deletions
indexing.delete_total | idto,indexingDeleteTotal | number of delete ops
pri.indexing.delete_total | | number of delete ops
indexing.index_current | iic,indexingIndexCurrent | number of current indexing ops
pri.indexing.index_current | | number of current indexing ops
indexing.index_time | iiti,indexingIndexTime | time spent in indexing
pri.indexing.index_time | | time spent in indexing
indexing.index_total | iito,indexingIndexTotal | number of indexing ops
pri.indexing.index_total | | number of indexing ops
indexing.index_failed | iif,indexingIndexFailed | number of failed indexing ops
pri.indexing.index_failed | | number of failed indexing ops
merges.current | mc,mergesCurrent | number of current merges
pri.merges.current | | number of current merges
merges.current_docs | mcd,mergesCurrentDocs | number of current merging docs
pri.merges.current_docs | | number of current merging docs
merges.current_size | mcs,mergesCurrentSize | size of current merges
pri.merges.current_size | | size of current merges
merges.total | mt,mergesTotal | number of completed merge ops
pri.merges.total | | number of completed merge ops
merges.total_docs | mtd,mergesTotalDocs | docs merged
pri.merges.total_docs | | docs merged
merges.total_size | mts,mergesTotalSize | size merged
pri.merges.total_size | | size merged
merges.total_time | mtt,mergesTotalTime | time spent in merges
pri.merges.total_time | | time spent in merges
refresh.total | rto,refreshTotal | total refreshes
pri.refresh.total | | total refreshes
refresh.time | rti,refreshTime | time spent in refreshes
pri.refresh.time | | time spent in refreshes
refresh.listeners | rli,refreshListeners | number of pending refresh listeners
pri.refresh.listeners | | number of pending refresh listeners
search.fetch_current | sfc,searchFetchCurrent | current fetch phase ops
pri.search.fetch_current | | current fetch phase ops
search.fetch_time | sfti,searchFetchTime | time spent in fetch phase
pri.search.fetch_time | | time spent in fetch phase
search.fetch_total | sfto,searchFetchTotal | total fetch ops
pri.search.fetch_total | | total fetch ops
search.open_contexts | so,searchOpenContexts | open search contexts
pri.search.open_contexts | | open search contexts
search.query_current | sqc,searchQueryCurrent | current query phase ops
pri.search.query_current | | current query phase ops
search.query_time | sqti,searchQueryTime | time spent in query phase
pri.search.query_time | | time spent in query phase
search.query_total | sqto,searchQueryTotal | total query phase ops
pri.search.query_total | | total query phase ops
search.scroll_current | scc,searchScrollCurrent | open scroll contexts
pri.search.scroll_current | | open scroll contexts
search.scroll_time | scti,searchScrollTime | time scroll contexts held open
pri.search.scroll_time | | time scroll contexts held open
search.scroll_total | scto,searchScrollTotal | completed scroll contexts
pri.search.scroll_total | | completed scroll contexts
segments.count | sc,segmentsCount | number of segments
pri.segments.count | | number of segments
segments.memory | sm,segmentsMemory | memory used by segments
pri.segments.memory | | memory used by segments
segments.index_writer_memory | siwm,segmentsIndexWriterMemory | memory used by index writer
pri.segments.index_writer_memory | | memory used by index writer
segments.version_map_memory | svmm,segmentsVersionMapMemory | memory used by version map
pri.segments.version_map_memory | | memory used by version map
segments.fixed_bitset_memory | sfbm,fixedBitsetMemory | memory used by fixed bit sets for nested object field types and type filters for types referred in _parent fields
pri.segments.fixed_bitset_memory | | memory used by fixed bit sets for nested object field types and type filters for types referred in _parent fields
warmer.current | wc,warmerCurrent | current warmer ops
pri.warmer.current | | current warmer ops
warmer.total | wto,warmerTotal | total warmer ops
pri.warmer.total | | total warmer ops
warmer.total_time | wtt,warmerTotalTime | time spent in warmers
pri.warmer.total_time | | time spent in warmers
suggest.current | suc,suggestCurrent | number of current suggest ops
pri.suggest.current | | number of current suggest ops
suggest.time | suti,suggestTime | time spend in suggest
pri.suggest.time | | time spend in suggest
suggest.total | suto,suggestTotal | number of suggest ops
pri.suggest.total | | number of suggest ops
memory.total | tm,memoryTotal | total used memory
pri.memory.total | | total user memory
[elk@es logs]$
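These column names (or their aliases from the second column) can be passed to the _cat/indices API via the h parameter to show only the columns you care about, for example:
curl -X GET 'http://localhost:9250/_cat/indices?v&h=health,status,index,docs.count,store.size'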
3: Create an index
[elk@es logs]$ curl -X PUT 'localhost:9250/abctest'
{"acknowledged":true,"shards_acknowledged":true}
[elk@es logs]$
The index name must be lowercase, and once an index is created its name cannot be changed. The acknowledged field reports the result of the operation: true or false.
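Shard and replica counts can also be set at creation time by sending a settings body with the PUT request. A sketch (the values 3 and 1 are arbitrary):
curl -X PUT 'http://localhost:9250/abctest' -H 'Content-Type: application/json' -d '
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}'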
4: Delete an index
To delete an index, simply replace PUT with DELETE:
[elk@es logs]$ curl -X DELETE 'localhost:9250/abctest'
{"acknowledged":true}
[elk@es logs]$
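To confirm that the index is gone, an existence check with a HEAD request can be used; it should return 404 Not Found after the deletion (and 200 OK while the index still exists):
curl -I 'http://localhost:9250/abctest'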