Lucene suggest [Reposted]
In Lucene we have several different suggest implementations, under the suggest module; today I'm describing the new AnalyzingSuggester (to be committed soon; it should be available in 4.1).
To use it, you provide the set of suggest targets, which is the full set of strings and weights that may be suggested. The targets can come from anywhere; typically you'd process your query logs to create the targets, giving a higher weight to those queries that appear more frequently. If you sell movies you might use all movie titles with a weight according to sales popularity.
You also provide an analyzer, which is used to process each target into analyzed form. Under the hood, the analyzed form is indexed into an FST. At lookup time, the incoming query is processed by the same analyzer and the FST is searched for all completions sharing the analyzed form as a prefix.
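For concreteness, here is a minimal sketch of building and querying the suggester. It assumes the Lucene 4.1-era API (the AnalyzingSuggester(Analyzer) constructor, Lookup.build(Dictionary), and Lookup.lookup(CharSequence, boolean, int)); exact signatures have shifted across releases, and targets.txt is a hypothetical file of tab-separated target/weight lines of the kind FileDictionary can parse.

```java
import java.io.FileInputStream;
import java.util.List;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.search.suggest.FileDictionary;
import org.apache.lucene.search.suggest.Lookup.LookupResult;
import org.apache.lucene.search.suggest.analyzing.AnalyzingSuggester;
import org.apache.lucene.util.Version;

public class SuggestDemo {
  public static void main(String[] args) throws Exception {
    // The same analyzer is used to index the targets and to process queries.
    Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_41);
    AnalyzingSuggester suggester = new AnalyzingSuggester(analyzer);

    // targets.txt (hypothetical file) holds one "target<TAB>weight" pair per
    // line, e.g. derived from query logs; FileDictionary parses such a file.
    suggester.build(new FileDictionary(new FileInputStream("targets.txt")));

    // The query is analyzed and matched as a prefix against the FST of
    // analyzed forms; the original (unanalyzed) targets come back.
    List<LookupResult> results = suggester.lookup("ghost", false, 5);
    for (LookupResult result : results) {
      System.out.println(result.key + " (weight=" + result.value + ")");
    }
  }
}
```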
Even though the matching is performed on the analyzed form, what's suggested is the original target (i.e., the unanalyzed input). Because Lucene has such a rich set of analyzer components, this can be used to create some useful suggesters:
- With an analyzer that folds or normalizes case, accents, etc. (e.g., using ICUFoldingFilter), the suggestions will match irrespective of case and accents. For example, the query "ame..." would suggest Amélie (a folding analyzer along these lines is sketched after this list).
- With an analyzer that removes stopwords and normalizes case, the query "ghost..." would suggest "The Ghost of Christmas Past".
- Even graph TokenStreams, such as SynonymFilter, will work: in such cases we enumerate and index all analyzed paths into the FST. If the analyzer recognizes "wifi" and "wireless network" as synonyms, and you have the suggest target "wifi router" then the user query "wire..." would suggest "wifi router".
- Japanese suggesters may now be possible, with an analyzer that copies the reading (ReadingAttribute in the Kuromoji analyzer) as its output.
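Here is a minimal sketch of the folding analyzer referenced in the first item above, assuming the lucene-analyzers-icu module is on the classpath and the Lucene 4.1-era Analyzer API in which createComponents receives a Reader; tokenizer constructors differ in later versions. An instance of this analyzer would be passed to the AnalyzingSuggester constructor shown earlier.

```java
import java.io.Reader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.icu.ICUFoldingFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.util.Version;

// A suggester analyzer that folds case, accents, etc., so "ame" can match "Amélie".
public final class FoldingSuggestAnalyzer extends Analyzer {
  @Override
  protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
    Tokenizer source = new StandardTokenizer(Version.LUCENE_41, reader); // word-boundary tokenizer
    TokenStream folded = new ICUFoldingFilter(source);                   // Unicode case/accent/width folding
    return new TokenStreamComponents(source, folded);
  }
}
```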
Given the diversity of analyzers, and the easy extensibility for applications to create their own analyzers, I'm sure there are many interesting use cases for this new AnalyzingSuggester: if you have an example please share with us on Lucene's user list (java-user@lucene.apache.org).
While this is a great step forward, there's still plenty to do with Lucene's suggesters. We need to allow for fuzzy matching on the query so we're more robust to typos (there's a rough prototype patch on LUCENE-3846). We need to predict based on only part of the query, instead of insisting on a full prefix match. There are a number of interesting elements to Google's autosuggest that we could draw inspiration from. As always, patches welcome!
FST
Essentially, an FST is a SortedMap<ByteSequence,SomeOutput>, if the arcs are in sorted order. With the right representation, it requires far less RAM than other SortedMap implementations, but has a higher CPU cost during lookup. The low memory footprint is vital for Lucene since an index can easily have many millions (sometimes, billions!) of unique terms.
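As a rough illustration of the SortedMap analogy, here is a small sketch using the FST classes that eventually landed in Lucene's org.apache.lucene.util.fst package (Builder, PositiveIntOutputs, Util); helper signatures vary somewhat between releases, and keys must be added in sorted order, as discussed below.

```java
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.IntsRef;
import org.apache.lucene.util.fst.Builder;
import org.apache.lucene.util.fst.FST;
import org.apache.lucene.util.fst.PositiveIntOutputs;
import org.apache.lucene.util.fst.Util;

public class FstMapDemo {
  public static void main(String[] args) throws Exception {
    // Outputs are non-negative longs (e.g. file pointers or ordinals); in some
    // releases getSingleton() takes a boolean output-sharing flag instead.
    PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
    Builder<Long> builder = new Builder<Long>(FST.INPUT_TYPE.BYTE1, outputs);

    // Keys must be added in sorted (unicode) order, just like Lucene's terms.
    String[] keys = {"cat", "cats", "dog"};
    long[] values = {5L, 7L, 12L};
    IntsRef scratch = new IntsRef();
    for (int i = 0; i < keys.length; i++) {
      builder.add(Util.toIntsRef(new BytesRef(keys[i]), scratch), values[i]);
    }
    FST<Long> fst = builder.finish();

    // Lookup behaves like SortedMap.get: exact key in, output out (null if absent).
    Long value = Util.get(fst, new BytesRef("cats"));
    System.out.println("cats -> " + value);  // prints 7
  }
}
```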
There's a great deal of theory behind FSTs. They generally support the same operations as FSMs (determinize, minimize, union, intersect, etc.). You can also compose them, where the outputs of one FST are intersected with the inputs of the next, resulting in a new FST.
There are some nice general-purpose FST toolkits (OpenFst looks great) that support all these operations, but for Lucene I decided to implement this neat algorithm which incrementally builds up the minimal unweighted FST from pre-sorted inputs. This is a perfect fit for Lucene since we already store all our terms in sorted (unicode) order.
The resulting implementation (currently a patch on LUCENE-2792) is fast and memory efficient: it builds the 9.8 million terms in a 10 million document Wikipedia index in ~8 seconds (on a fast computer), requiring less than 256 MB heap. The resulting FST is 69 MB. It can also build a prefix trie, pruning by how many terms come through each node, with even less memory.
Note that because addition is commutative, an FST with numeric outputs is not guaranteed to be minimal in my implementation; perhaps if I could generalize the algorithm to a weighted FST instead, which also stores a weight on each arc, that would yield the minimal FST. But I don't expect this will be a problem in practice for Lucene.
In the patch I modified the SimpleText codec, which was loading all terms into a TreeMap mapping the BytesRef term to an int docFreq and long filePointer, to use an FST instead, and all tests pass!
There are lots of other potential places in Lucene where we could use FSTs, since we often need to map the index terms to "something". For example, the terms index maps to a long file position; the field cache maps to ordinals; the terms dictionary maps to codec-specific metadata, etc. We also have multi-term queries (e.g., Prefix, Wildcard, Fuzzy, Regexp) that need to test a large number of terms and could instead work directly via intersection with the FST (many apps could easily fit their entire terms dict in RAM as an FST since the format is so compact). The FST could also be used as a key/value store. Lots of fun things to try!