What settings and mappings mean in Elasticsearch

  Put simply:

  settings controls the number of shards and replicas of an index.

  mappings defines the fields of an index and their data types.

  Keep in mind that both can be managed through the REST URL interface or through the Java API. The URL approach is recommended because it is much simpler.

1. settings in ES

  Query an index's settings

[hadoop@HadoopMaster elasticsearch-2.4.3]$ curl -XGET http://192.168.80.10:9200/zhouls/_settings?pretty
{
  "zhouls" : {
    "settings" : {
      "index" : {
        "creation_date" : "1488203759467",
        "uuid" : "Sppm-db_Qm-OHptOC7vznw",
        "number_of_replicas" : "1",
        "number_of_shards" : "5",
        "version" : {
          "created" : "2040399"
        }
      }
    }
  }
}
[hadoop@HadoopMaster elasticsearch-2.4.3]$

Modifying an index's default configuration with settings

  For example: the number of shards and the number of replicas.

  View the current settings: curl -XGET http://192.168.80.10:9200/zhouls/_settings?pretty

  For an index that does not exist yet (create it with settings): curl -XPUT '192.168.80.10:9200/liuch/' -d'{"settings":{"number_of_shards":3,"number_of_replicas":0}}'

  For an index that already exists (update its settings): curl -XPUT '192.168.80.10:9200/zhouls/_settings' -d'{"index":{"number_of_replicas":1}}'

Summary: when an index does not exist yet, you can specify both the shard count and the replica count at creation time; once the index exists, only the replica count can still be changed.

  When creating a new index you can also specify the number of replicas for its shards. The default is 1.
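
  A minimal sketch of that difference, using the zhouls index from the examples above (the exact error text may differ slightly by version):

curl -XPUT 'http://192.168.80.10:9200/zhouls/_settings' -d '{"index":{"number_of_replicas":2}}'
# the replica count is a dynamic setting, so this returns {"acknowledged":true}

curl -XPUT 'http://192.168.80.10:9200/zhouls/_settings' -d '{"index":{"number_of_shards":6}}'
# the shard count is fixed at creation time, so this request is rejected with an error
# along the lines of "can't update non dynamic settings ... index.number_of_shards"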

2. mappings in ES

  How is an ES mapping used? When do you need to define it manually, and when can you rely on the automatic one?

A mapping defines the field names and data types of the documents in an index, similar to a table schema in MySQL. An ES mapping is much more flexible than a database schema, though, because it can recognize fields dynamically. In general you do not need to specify a mapping at all: ES infers each field's type from the data. But if certain fields need special behavior (for example a different analyzer, whether the field is analyzed, whether it is stored, and so on), you must define the mapping manually.

  When indexing data into ES we do not have to declare data types; ES's automatic mapping mechanism maps text to string and whole numbers to long. Through mappings you can also control properties such as whether a field is stored.
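
  A quick sketch of the automatic mapping (the index name test_index and the field values are made up for illustration):

curl -XPUT 'http://192.168.80.10:9200/test_index/doc/1' -d '{"name":"tom","score":90}'
# no mapping was defined beforehand, so ES creates one on the fly

curl -XGET 'http://192.168.80.10:9200/test_index/_mapping?pretty'
# "name" comes back as type string and "score" as type long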

  Query an index's mapping

[hadoop@HadoopMaster elasticsearch-2.4.3]$ curl -XGET http://192.168.80.10:9200/zhouls/emp/_mapping?pretty
{
  "zhouls" : {
    "mappings" : {
      "emp" : {
        "properties" : {
          "name" : {
            "type" : "string"
          },
          "score" : {
            "type" : "long"
          },
          "type" : {
            "type" : "string"
          }
        }
      }
    }
  }
}
[hadoop@HadoopMaster elasticsearch-2.4.3]$

Modifying field properties with mappings

  For example: the field type, which analyzer (tokenizer) to use, and so on, as follows.

Note: the analyzer for a field is specified with the analyzer parameter (older releases also accepted indexAnalyzer / index_analyzer).

For an index that does not exist yet (define the mapping at creation time):
  curl -XPUT '192.168.80.10:9200/zhouls' -d'{"mappings":{"emp":{"properties":{"name":{"type":"string","analyzer": "ik_max_word"}}}}}'
For an index that already exists (add to its mapping):
  curl -XPOST http://192.168.80.10:9200/zhouls/emp/_mapping -d'{"properties":{"name":{"type":"string","analyzer": "ik_max_word"}}}'
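
  A quick check (a sketch, assuming the ik analysis plugin is installed on the cluster): after the request above, the analyzer should show up in the mapping. Note that Elasticsearch generally refuses to change the analyzer of a field that already exists, so the update goes through when the field is new or the setting is unchanged.

curl -XGET 'http://192.168.80.10:9200/zhouls/emp/_mapping?pretty'
# the "name" field should now include "analyzer" : "ik_max_word"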

  If the commands above still feel abstract, here is a complete worked example. (Make sure you can do this yourself.)

 

Step 1: Create the tvcount.json file

  Its content is as follows (with annotations):

{
  "settings":{                        # settings controls shards and replicas
    "number_of_shards":3,             # 3 primary shards
    "number_of_replicas":0            # 0 replicas
  },
  "mappings":{                        # mappings defines fields and types
    "tvcount":{
      "dynamic":"strict",
      "_all":{"enabled":false},
      "properties":{
        "tvname":{"type":"string","index":"analyzed","analyzer":"ik_max_word","search_analyzer": "ik_max_word"},
                  # i.e. type string, analyzed, indexed and searched with the ik_max_word analyzer
        "director":{"type":"string","index":"analyzed","analyzer":"ik_max_word","search_analyzer": "ik_max_word"},
        "actor":{"type":"string","index":"analyzed","analyzer":"ik_max_word","search_analyzer": "ik_max_word"},
        "allnumber":{"type":"string","index":"not_analyzed"},
        "tvtype":{"type":"string","index":"analyzed","analyzer":"ik_max_word","search_analyzer": "ik_max_word"},
        "description":{"type":"string","index":"analyzed","analyzer":"ik_max_word","search_analyzer": "ik_max_word"},
        "pic":{"type":"string","index":"not_analyzed"}
      }
    }
  }
}

  That is: tvname (show name), director, actor (leading cast), allnumber (total play count), tvtype (show category), description, plus pic (picture).
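
  Because the mapping sets "dynamic": "strict", a document containing a field that is not declared in the mapping is rejected instead of having the new field mapped automatically. A hedged illustration (runnable once the tv index is created in Step 2 below; the field "foo" is made up):

curl -XPOST 'http://master:9200/tv/tvcount' -d '{"tvname":"test","foo":"bar"}'
# rejected with a StrictDynamicMappingException-style error:
# "mapping set to strict, dynamic introduction of [foo] within [tvcount] is not allowed"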




  


[hadoop@master elasticsearch-2.4.0]$ ll
total 52
drwxrwxr-x 2 hadoop hadoop 4096 Jul 6 20:25 bin
drwxrwxr-x 3 hadoop hadoop 4096 Jul 6 20:27 config
drwxrwxr-x 2 hadoop hadoop 4096 Apr 21 14:19 lib
-rw-rw-r-- 1 hadoop hadoop 11358 Aug 24 2016 LICENSE.txt
drwxrwxr-x 5 hadoop hadoop 4096 Aug 29 2016 modules
-rw-rw-r-- 1 hadoop hadoop 150 Aug 24 2016 NOTICE.txt
drwxrwxr-x 6 hadoop hadoop 4096 Jul 6 15:33 plugins
-rw-rw-r-- 1 hadoop hadoop 8700 Aug 24 2016 README.textile
-rw-rw-r-- 1 hadoop hadoop 195 Jul 1 12:18 requests
[hadoop@master elasticsearch-2.4.0]$ vim tvcount.json

{
  "settings":{
    "number_of_shards":3,
    "number_of_replicas":0
  },
  "mappings":{
    "tvcount":{
      "dynamic":"strict",
      "_all":{"enabled":false},
      "properties":{
        "tvname":{"type":"string","index":"analyzed","analyzer":"ik_max_word","search_analyzer": "ik_max_word"},
        "director":{"type":"string","index":"analyzed","analyzer":"ik_max_word","search_analyzer": "ik_max_word"},
        "actor":{"type":"string","index":"analyzed","analyzer":"ik_max_word","search_analyzer": "ik_max_word"},
        "allnumber":{"type":"string","index":"not_analyzed"},
        "tvtype":{"type":"string","index":"analyzed","analyzer":"ik_max_word","search_analyzer": "ik_max_word"},
        "description":{"type":"string","index":"analyzed","analyzer":"ik_max_word","search_analyzer": "ik_max_word"},
        "pic":{"type":"string","index":"not_analyzed"}
      }
    }
  }
}

  You can also check the cluster in the head plugin UI: http://192.168.80.145:9200/_plugin/head/

 Step 2: Create the index with this mapping

  Because we are already in /home/hadoop/app/elasticsearch-2.4.0, the directory that contains the tvcount.json we just wrote, we can pass the file to curl directly:

curl -XPOST 'http://master:9200/tv' -d @tvcount.json

  Otherwise, you would need to reference the file by its absolute path.
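
  For example (a sketch, using the directory shown below), the absolute-path form would be:

curl -XPOST 'http://master:9200/tv' -d @/home/hadoop/app/elasticsearch-2.4.0/tvcount.json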

[hadoop@master elasticsearch-2.4.0]$ pwd
/home/hadoop/app/elasticsearch-2.4.0
[hadoop@master elasticsearch-2.4.0]$ ll
total 56
drwxrwxr-x 2 hadoop hadoop 4096 Jul 6 20:25 bin
drwxrwxr-x 3 hadoop hadoop 4096 Jul 6 20:27 config
drwxrwxr-x 2 hadoop hadoop 4096 Apr 21 14:19 lib
-rw-rw-r-- 1 hadoop hadoop 11358 Aug 24 2016 LICENSE.txt
drwxrwxr-x 5 hadoop hadoop 4096 Aug 29 2016 modules
-rw-rw-r-- 1 hadoop hadoop 150 Aug 24 2016 NOTICE.txt
drwxrwxr-x 6 hadoop hadoop 4096 Jul 6 15:33 plugins
-rw-rw-r-- 1 hadoop hadoop 8700 Aug 24 2016 README.textile
-rw-rw-r-- 1 hadoop hadoop 195 Jul 1 12:18 requests
-rw-rw-r-- 1 hadoop hadoop 1022 Jul 6 22:27 tvcount.json
[hadoop@master elasticsearch-2.4.0]$ curl -XPOST 'http://master:9200/tv' -d @tvcount.json
{"acknowledged":true}[hadoop@master elasticsearch-2.4.0]$
[hadoop@master elasticsearch-2.4.0]$
[hadoop@master elasticsearch-2.4.0]$

  Again, put simply:

  settings controls the number of shards and replicas.

  mappings defines the fields and their types.

  For details, see my post:

Elasticsearch之settings和mappings(图文详解)

  Now query the new index's settings:

[hadoop@master elasticsearch-2.4.0]$ pwd
/home/hadoop/app/elasticsearch-2.4.0
[hadoop@master elasticsearch-2.4.0]$ curl -XGET http://master:9200/tv/_settings?pretty
{
  "tv" : {
    "settings" : {
      "index" : {
        "creation_date" : "1499351407949",
        "uuid" : "O30Uk9uRTlGLRVfbO26gUQ",
        "number_of_replicas" : "0",
        "number_of_shards" : "3",
        "version" : {
          "created" : "2040099"
        }
      }
    }
  }
}
[hadoop@master elasticsearch-2.4.0]$

  Then check the mapping (recall that mappings defines the fields and types):

[hadoop@master elasticsearch-2.4.0]$ pwd
/home/hadoop/app/elasticsearch-2.4.0
[hadoop@master elasticsearch-2.4.0]$ curl -XGET http://master:9200/tv/_mapping?pretty
{
  "tv" : {
    "mappings" : {
      "tvcount" : {
        "dynamic" : "strict",
        "_all" : {
          "enabled" : false
        },
        "properties" : {
          "actor" : {
            "type" : "string",
            "analyzer" : "ik_max_word"
          },
          "allnumber" : {
            "type" : "string",
            "index" : "not_analyzed"
          },
          "description" : {
            "type" : "string",
            "analyzer" : "ik_max_word"
          },
          "director" : {
            "type" : "string",
            "analyzer" : "ik_max_word"
          },
          "pic" : {
            "type" : "string",
            "index" : "not_analyzed"
          },
          "tvname" : {
            "type" : "string",
            "analyzer" : "ik_max_word"
          },
          "tvtype" : {
            "type" : "string",
            "analyzer" : "ik_max_word"
          }
        }
      }
    }
  }
}
[hadoop@master elasticsearch-2.4.0]$

  

  In short, tvcount.json has already set up the initial settings and mappings for the tv index.
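
  As a hedged sketch, one sample document that conforms to this mapping can be indexed like so (all field values are made up for illustration):

curl -XPOST 'http://master:9200/tv/tvcount/1' -d '{
  "tvname":"sample show",
  "director":"some director",
  "actor":"some actor",
  "allnumber":"100000",
  "tvtype":"drama",
  "description":"a test record",
  "pic":"http://example.com/sample.jpg"
}'
# a successful response includes "created":true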

  Next, start HDFS and HBase.

    This part is straightforward, so I will not go into detail.

[hadoop@master elasticsearch-2.4.]$ cd $HADOOP_HOME
[hadoop@master hadoop-2.6.]$ jps
Jps
QuorumPeerMain
Elasticsearch
[hadoop@master hadoop-2.6.]$ sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master]
master: starting namenode, logging to /home/hadoop/app/hadoop-2.6./logs/hadoop-hadoop-namenode-master.out
slave2: starting datanode, logging to /home/hadoop/app/hadoop-2.6./logs/hadoop-hadoop-datanode-slave2.out
slave1: starting datanode, logging to /home/hadoop/app/hadoop-2.6./logs/hadoop-hadoop-datanode-slave1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6./logs/hadoop-hadoop-secondarynamenode-master.out
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6./logs/yarn-hadoop-resourcemanager-master.out
slave2: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6./logs/yarn-hadoop-nodemanager-slave2.out
slave1: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6./logs/yarn-hadoop-nodemanager-slave1.out
[hadoop@master hadoop-2.6.]$ jps
ResourceManager
Jps
NameNode
QuorumPeerMain
SecondaryNameNode
Elasticsearch
[hadoop@master hadoop-2.6.]$

[hadoop@slave1 elasticsearch-2.4.]$ jps
Elasticsearch
QuorumPeerMain
NodeManager
DataNode
Jps
[hadoop@slave1 elasticsearch-2.4.]$

[hadoop@slave2 elasticsearch-2.4.]$ jps
NodeManager
Elasticsearch
Jps
DataNode
QuorumPeerMain
[hadoop@slave2 elasticsearch-2.4.]$

[hadoop@master hadoop-2.6.]$ cd $HBASE_HOME
[hadoop@master hbase]$ bin/start-hbase.sh
starting master, logging to /home/hadoop/app/hbase/logs/hbase-hadoop-master-master.out
slave2: regionserver running as process . Stop it first.
slave1: starting regionserver, logging to /home/hadoop/app/hbase/bin/../logs/hbase-hadoop-regionserver-slave1.out
[hadoop@master hbase]$ jps
ResourceManager
HMaster
NameNode
QuorumPeerMain
SecondaryNameNode
Elasticsearch
Jps
[hadoop@master hbase]$

[hadoop@slave1 hbase]$ jps
Jps
HRegionServer
Elasticsearch
QuorumPeerMain
NodeManager
DataNode
HMaster
[hadoop@slave1 hbase]$

[hadoop@slave2 hbase]$ jps
NodeManager
Elasticsearch
Jps
HMaster
DataNode
HRegionServer
QuorumPeerMain
[hadoop@slave2 hbase]$

  Open the HBase shell:

[hadoop@master hbase]$ bin/hbase shell
-- ::, INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.98.-hadoop2, r1e527e73bc539a04ba0fa4ed3c0a82c7e9dd7d15, Fri Apr :: PDT
hbase(main)::>

  List the existing tables:

hbase(main)::> list
TABLE
-- ::, WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/app/hbase-0.98./lib/slf4j-log4j12-1.6..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/app/hadoop-2.6./share/hadoop/common/lib/slf4j-log4j12-1.7..jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
row(s) in 266.1210 seconds
=> []

  

  If the tvcount table already exists in HBase, you can drop it first:

hbase(main)::> disable 'tvcount'

ERROR: Table tvcount does not exist.

Here is some help for this command:
Start disable of named table:
hbase> disable 't1'
hbase> disable 'ns1:t1'

hbase(main)::> drop 'tvcount'

ERROR: Table tvcount does not exist.

Here is some help for this command:
Drop the named table. Table must first be disabled:
hbase> drop 't1'
hbase> drop 'ns1:t1'

hbase(main)::> list
TABLE
row(s) in 1.3770 seconds
=> []
hbase(main)::>

  Then start the MySQL database and create the database and tables.

  For the next steps, see

http://www.cnblogs.com/zlslch/p/6746922.html
