Writing analyzers
There are times when you would like to analyze text in a bespoke fashion, either by configuring how one of Elasticsearch’s built-in analyzers works, or by combining analysis components together to build a custom analyzer.
The analysis chain
An analyzer is built of three components:
- 0 or more character filters
- exactly 1 tokenizer
- 0 or more token filters
Check out the Elasticsearch documentation on the Anatomy of an analyzer to understand more.
Specifying an analyzer on a field mapping
An analyzer can be specified on a text datatype field mapping when creating a new field on a type, usually when creating the type mapping at index creation time, but also when adding a new field using the Put Mapping API.
Although you can add new types to an index, or add new fields to a type, you can’t add new analyzers or make changes to existing fields. If you were to do so, the data that has already been indexed would be incorrect and your searches would no longer work as expected.
When you need to make changes to existing fields, you should look at reindexing your data with the Reindex API.
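As a rough sketch of that approach, documents can be copied from an existing index into a new index created with the updated analysis settings. The destination index name below is a placeholder, and client is the same IElasticClient instance used throughout these examples:
// Sketch: copy documents into a new index that was created with the
// updated analysis settings. "my-index-v2" is a hypothetical index name.
var reindexResponse = client.ReindexOnServer(r => r
    .Source(s => s.Index("my-index"))
    .Destination(d => d.Index("my-index-v2"))
);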
Here’s a simple example that specifies that the name field in Elasticsearch, which maps to the Name POCO property on the Project type, uses the whitespace analyzer at index time:
var createIndexResponse = client.CreateIndex("my-index", c => c
.Mappings(m => m
.Map<Project>(mm => mm
.Properties(p => p
.Text(t => t
.Name(n => n.Name)
.Analyzer("whitespace")
)
)
)
)
);
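To inspect the tokens an analyzer produces for a given input, the Analyze API is useful. A minimal sketch, assuming the NEST Analyze method with a purely illustrative input string:
// Sketch: see what the whitespace analyzer produces for some sample text
var analyzeResponse = client.Analyze(a => a
    .Analyzer("whitespace")
    .Text("The quick brown fox")
);
foreach (var token in analyzeResponse.Tokens)
{
    // prints "The", "quick", "brown" and "fox"; the whitespace analyzer
    // splits on whitespace and does not lowercase
    Console.WriteLine(token.Token);
}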
Configuring a built-in analyzer
Several built-in analyzers can be configured to alter their behaviour. For example, the standard analyzer can be configured to support a list of stop words with the stop word token filter it contains.
Configuring a built-in analyzer requires creating an analyzer based on the built-in one:
var createIndexResponse = client.CreateIndex("my-index", c => c
.Settings(s => s
.Analysis(a => a
.Analyzers(aa => aa
.Standard("standard_english", sa => sa
.StopWords("_english_")
)
)
)
)
.Mappings(m => m
.Map<Project>(mm => mm
.Properties(p => p
.Text(t => t
.Name(n => n.Name)
.Analyzer("standard_english")
)
)
)
)
);
In this example, "_english_" refers to the pre-defined list of English stopwords within Elasticsearch, and the name field mapping uses the standard_english analyzer configured in the index settings.
This create index request generates the following JSON:
{
"settings": {
"analysis": {
"analyzer": {
"standard_english": {
"type": "standard",
"stopwords": [
"_english_"
]
}
}
}
},
"mappings": {
"project": {
"properties": {
"name": {
"type": "text",
"analyzer": "standard_english"
}
}
}
}
}
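To check that the configured analyzer behaves as expected, the Analyze API can also be run against the index. A minimal sketch, assuming the index above has been created and using an illustrative input:
// Sketch: run the analyze API against the index to confirm that English
// stop words such as "the" and "of" are removed by standard_english
var analyzeResponse = client.Analyze(a => a
    .Index("my-index")
    .Analyzer("standard_english")
    .Text("The name of the project")
);
// expected tokens: "name", "project" (stop words removed, text lowercased)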
Creating a custom analyzer
A custom analyzer can be composed when none of the built-in analyzers fit your needs. A custom analyzer is built from the components that you saw in the analysis chain, plus a position increment gap that determines the size of the gap Elasticsearch should insert between array elements when a field can hold multiple values, e.g. a List<string> POCO property.
For this example, imagine we are indexing programming questions, where the question content is HTML and contains source code:
public class Question
{
public int Id { get; set; }
public DateTimeOffset CreationDate { get; set; }
public int Score { get; set; }
public string Body { get; set; }
}
Based on our domain knowledge of programming languages, we would like to be able to search for questions that contain "C#", but using the standard analyzer, "C#" will be analyzed and produce the token "c". This won’t work for our use case as there will be no way to distinguish questions about "C#" from questions about another popular programming language, "C".
We can solve our issue with a custom analyzer:
var createIndexResponse = client.CreateIndex("questions", c => c
.Settings(s => s
.Analysis(a => a
.CharFilters(cf => cf
.Mapping("programming_language", mca => mca
.Mappings(new []
{
"c# => csharp",
"C# => Csharp"
})
)
)
.Analyzers(an => an
.Custom("question", ca => ca
.CharFilters("html_strip", "programming_language")
.Tokenizer("standard")
.Filters("standard", "lowercase", "stop")
)
)
)
)
.Mappings(m => m
.Map<Question>(mm => mm
.AutoMap()
.Properties(p => p
.Text(t => t
.Name(n => n.Body)
.Analyzer("question")
)
)
)
)
);
Our custom question analyzer will apply the following analysis to a question body:
- strip HTML tags
- map both C# and c# to "Csharp" and "csharp", respectively (so the # is not stripped by the tokenizer)
- tokenize using the standard tokenizer
- filter tokens with the standard token filter
- lowercase tokens
- remove stop word tokens
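With the index in place, a question can be indexed as usual; the document below is purely illustrative:
// Sketch: index a question whose body contains HTML and the text "C#".
// The custom question analyzer strips the HTML and indexes the token "csharp".
var question = new Question
{
    Id = 1,
    CreationDate = DateTimeOffset.UtcNow,
    Score = 42,
    Body = "<p>How do I parse a JSON string in C#?</p>"
};
var indexResponse = client.Index(question, i => i.Index("questions"));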
A full text query will also apply the same analysis to the query input against the question body at search time, meaning that when someone searches with the input "C#", it will also be analyzed and produce the token "csharp", matching a question body that contains "C#" (as well as "csharp" and case variants), because the search time analysis applied is the same as the index time analysis.
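A minimal sketch of such a search, using a match query on the body field; the query input "C#" is analyzed with the question analyzer and produces the token "csharp":
// Sketch: the match query input "C#" is analyzed at search time and
// produces the token "csharp", matching the question indexed above
var searchResponse = client.Search<Question>(s => s
    .Index("questions")
    .Query(q => q
        .Match(m => m
            .Field(f => f.Body)
            .Query("C#")
        )
    )
);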
Index and search time analysis
With the previous example, we probably don’t want to apply the same analysis to the query input of a full text query against a question body; we know for our problem domain that a query input is not going to contain HTML tags, so we would like to apply different analysis at search time.
An analyzer to use at search time can be specified when creating the field mapping, in addition to an analyzer to use at index time:
var createIndexResponse = client.CreateIndex("questions", c => c
.Settings(s => s
.Analysis(a => a
.CharFilters(cf => cf
.Mapping("programming_language", mca => mca
.Mappings(new[]
{
"c# => csharp",
"C# => Csharp"
})
)
)
.Analyzers(an => an
.Custom("index_question", ca => ca
.CharFilters("html_strip", "programming_language")
.Tokenizer("standard")
.Filters("standard", "lowercase", "stop")
)
.Custom("search_question", ca => ca
.CharFilters("programming_language")
.Tokenizer("standard")
.Filters("standard", "lowercase", "stop")
)
)
)
)
.Mappings(m => m
.Map<Question>(mm => mm
.AutoMap()
.Properties(p => p
.Text(t => t
.Name(n => n.Body)
.Analyzer("index_question")
.SearchAnalyzer("search_question")
)
)
)
)
);
Here, the analyzer used at index time strips HTML tags, while the analyzer used at search time does not.
With this in place, the text of a question body will be analyzed with the index_question analyzer at index time, and the input to a full text query on the question body field will be analyzed with the search_question analyzer, which does not use the html_strip character filter.
A search analyzer can also be specified per query, i.e. a different analyzer can be used for a particular request than the one specified in the mapping. This can be useful when iterating on and improving your search strategy.
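A minimal sketch of overriding the analyzer for a single match query; search_question is the analyzer defined earlier, but any analyzer available on the index could be used:
// Sketch: use a specific analyzer for this request only, overriding
// the search analyzer specified in the field mapping
var searchResponse = client.Search<Question>(s => s
    .Index("questions")
    .Query(q => q
        .Match(m => m
            .Field(f => f.Body)
            .Query("C#")
            .Analyzer("search_question")
        )
    )
);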
Take a look at the analyzer documentation for more details around where analyzers can be specified and the precedence for a given request.