Solr and Me (5): A Detailed Look at the Configuration in schema.xml
Let me start by pasting the content of the file:
<?xml version="1.0" encoding="UTF-8" ?>
<!--
 Copyright notice.
-->
<!--
 This is the Solr schema file. It should be named "schema.xml" and placed in the conf
 directory of your core (solrhome/core/conf). Alternatively, Solr can also pick it up
 from the webapp's classloader.
 For more information see http://wiki.apache.org/solr/SchemaXml

 Performance notes - the following can improve performance:
 - Set stored="false" for fields that are only searched on and never need to be returned.
 - Set indexed="false" for fields that are only returned and never searched on.
 - Remove all copyField declarations you do not need.
 - For the best index size and indexing performance, set indexed="false" on all general
   text fields, use copyField to copy them into one catch-all field, and search on that
   field instead (see the commented sketch below).
 - Run the JVM in server mode and use a higher log level so that not every request is
   logged.
-->
<schema name="example-data-driven-schema" version="1.6">
<!-- The schema name and version of this configuration. -->
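<!--
 A minimal sketch of the last two tips above (not part of the original schema; the field
 names "summary" and "all_text" are hypothetical): a text field that is stored but not
 indexed, copied into a catch-all field that is indexed but not stored and is the one
 actually searched.

 <field name="summary"  type="text_general" indexed="false" stored="true"/>
 <field name="all_text" type="text_general" indexed="true"  stored="false" multiValued="true"/>
 <copyField source="summary" dest="all_text"/>
-->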
<!-- Valid attributes for a field:
 - name: the field name. The special field "_version_" is required.
 - type: the field type; any type used here must be defined as a fieldType below.
 - default: the default value.
 - indexed: whether the field is indexed (searchable).
 - stored: whether the original value is stored (set to false whenever you do not need
   the value returned).
 - docValues: whether a docValues structure is built for this field. This benefits
   faceting, grouping, sorting and function queries. It is not required, but it speeds
   up index loading, is friendlier to NRT (near-real-time) search and uses less memory.
   It does have limitations: docValues is currently only supported by types such as
   StrField, UUIDField and the Trie* types, and the field must be single-valued rather
   than multi-valued.
 - sortMissingFirst / sortMissingLast: when sorting, documents that have no value for
   this field are placed first / last, regardless of the requested sort order.
 - multiValued: whether the field may hold multiple values, e.g. all friend IDs of a
   user. (Set it to true for any field that may carry multiple values, otherwise
   indexing will throw an error.)
 - omitNorms: if true, length normalization and index-time boosting for this field are
   skipped, which saves memory. Only set it to false for full-text fields or when you
   need index-time boosts; for primitive, non-tokenized types such as TrieIntField,
   TrieLongField and StrField, norms are omitted by default, otherwise the default is
   false.
 - required: the field must be present when a document is added, similar to NOT NULL in
   MySQL.
 - termVectors: set to true to store term vector information for this field; it is
   needed when you want the MoreLikeThis feature and improves its performance.
 - termPositions: whether to store term position information. It enlarges the index,
   but highlighting depends on it; without it highlighting is not possible.
 - termOffsets: whether to store term offset information; highlighting also needs it,
   and it affects the result set when you use SpanQuery.
-->
<!-- Field names should consist of alphanumeric or underscore characters and should not
 start with a digit. This is not currently strictly enforced, but other field names will
 not have first-class support from all components and backward compatibility is not
 guaranteed. Names with both leading and trailing underscores (e.g. _version_) are
 reserved. -->
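<!--
 Hypothetical examples (not part of the original schema) showing how several of the
 attributes above combine on a single field:

 <field name="category" type="string" indexed="true" stored="true"
        docValues="true" default="general"/>
 <field name="comments" type="text_general" indexed="true" stored="true"
        multiValued="true" termVectors="true"/>
-->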
<!--
 In the data_driven_schema_configs configset, the following three fields are required:
 id, _version_ and _text_. All other fields can be removed or modified, and new ones can
 be added to this XML as needed.
 Note that many dynamic fields are also defined: you can use them to choose a field's
 type through a field-naming convention (see below).
 WARNING: the _text_ catch-all field will significantly increase the size of your index.
 If you do not need it, consider removing it together with the corresponding copyField
 directive. -->
<field name="id" type="long" indexed="true" stored="true" required="true"/>
<!-- Regular fields -->
<field name="informer_id" type="long" indexed="true" stored="false"/>
<field name="phone_number" type="string" indexed="true" stored="false"/> <field name="title" type="string" indexed="true" stored="true" />
<field name="content" type="string" indexed="true" stored="true" />
<field name="latitude" type="string" indexed="true" stored="true" />
<field name="longitude" type="string" indexed="true" stored="true" />
<field name="attachment" type="string" indexed="true" stored="true" /> <field name="clue_status" type="int" indexed="true" stored="true" />
<field name="del_flag" type="int" indexed="true" stored="true" />
<field name="gmt_create" type="date" indexed="true" stored="true" />
<field name="create_uid" type="long" indexed="true" stored="true" />
<field name="gmt_modified" type="date" indexed="true" stored="true" />
<field name="modified_uid" type="long" indexed="true" stored="true" />
<!-- Reserved fields -->
<!--<field name="id" type="string" indexed="true" stored="true" multiValued="false" />-->
<field name="_version_" type="long" indexed="true" stored="false"/>
<field name="_root_" type="string" indexed="true" stored="false" docValues="false" />
<field name="_text_" type="text_general" indexed="true" stored="false" multiValued="true"/>
<!-- Copy fields -->
<!-- It is recommended to define a copy field that gathers all full-text fields into one
 field, so that searches can be run against a single unified field.
 Note that when you copy a single source field, the destination is multi-valued whenever
 the source itself is multi-valued; when you copy several source fields, the destination
 must be multi-valued as soon as any one of the sources is. Keep this in mind.
-->
<copyField source="*" dest="_text_"/>
<!-- Dynamic fields -->
<!-- A dynamic field takes the same attributes as a regular field; the main difference is
 that its name may contain a wildcard. For example, name="*_i" matches every field whose
 name ends in "_i", so documents can still be indexed even when some of their field names
 were never declared explicitly (see the commented example after this list). -->
<dynamicField name="*_i" type="int" indexed="true" stored="true"/>
<dynamicField name="*_is" type="ints" indexed="true" stored="true"/>
<dynamicField name="*_s" type="string" indexed="true" stored="true" />
<dynamicField name="*_ss" type="strings" indexed="true" stored="true"/>
<dynamicField name="*_l" type="long" indexed="true" stored="true"/>
<dynamicField name="*_ls" type="longs" indexed="true" stored="true"/>
<dynamicField name="*_t" type="text_general" indexed="true" stored="true"/>
<dynamicField name="*_txt" type="text_general" indexed="true" stored="true"/>
<dynamicField name="*_b" type="boolean" indexed="true" stored="true"/>
<dynamicField name="*_bs" type="booleans" indexed="true" stored="true"/>
<dynamicField name="*_f" type="float" indexed="true" stored="true"/>
<dynamicField name="*_fs" type="floats" indexed="true" stored="true"/>
<dynamicField name="*_d" type="double" indexed="true" stored="true"/>
<dynamicField name="*_ds" type="doubles" indexed="true" stored="true"/> <!-- 字段类型 -->
<!-- Field types -->
<!--
 - StrField: a non-tokenized string type. It supports docValues, but with docValues
   enabled the field must be single-valued and must either always have a value or have a
   default value.
 - BoolField: a boolean type for true/false values.
 - TrieIntField, TrieFloatField, TrieLongField, TrieDoubleField: the default numeric
   types. precisionStep mainly affects numeric range queries: the smaller it is, the more
   tokens each value is indexed as, which enlarges the index on disk but speeds up range
   queries. positionIncrementGap is the gap between the values of a multi-valued field;
   setting it on a single-valued field has no effect.
 - TrieDateField: a date type. Unfortunately it only accepts dates in the
   1995-12-31T23:59:59Z format, which is inconvenient, so I wrote a custom
   TrieCNDateField type that accepts the yyyy-MM-dd HH:mm:ss format commonly used in
   China; see my previous post for the source code.
 - BinaryField: a type for base64-encoded data, i.e. binary values must be base64
   encoded before Solr can index them.
 - RandomSortField: a type for pseudo-random ordering; use it when you need random
   sorting of results.
 - TextField: the most widely used type. Its values are tokenized, so it normally needs
   an analyzer configuration; configuring the IK analyzer for it is not covered here. -->
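<!--
 Example for illustration (the value is made up): a TrieDateField value must use the full
 ISO-8601 UTC form, e.g.

 <field name="gmt_create">2017-06-01T08:30:00Z</field>

 while a value such as "2017-06-01 08:30:00" would be rejected.
-->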
<fieldType name="string" class="solr.StrField" sortMissingLast="true" docValues="true" />
<fieldType name="strings" class="solr.StrField" sortMissingLast="true" multiValued="true" docValues="true" />
<fieldType name="boolean" class="solr.BoolField" sortMissingLast="true"/>
<fieldType name="booleans" class="solr.BoolField" sortMissingLast="true" multiValued="true"/>
<!-- sortMissingLast and sortMissingFirst are optional attributes, and are
currently supported on types that are sorted internally as strings
and on numeric types.
This includes "string","boolean", and, as of 3.5 (and 4.x),
int, float, long, date, double, including the "Trie" variants.
- If sortMissingLast="true", then a sort on this field will cause documents
without the field to come after documents with the field,
regardless of the requested sort order (asc or desc).
- If sortMissingFirst="true", then a sort on this field will cause documents
without the field to come before documents with the field,
regardless of the requested sort order.
- If sortMissingLast="false" and sortMissingFirst="false" (the default),
then default lucene sorting will be used which places docs without the
field first in an ascending sort and last in a descending sort.
-->
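<!--
 Illustration (the field name is hypothetical): "*_s" fields use the "string" type below,
 which sets sortMissingLast="true", so a request such as
   /select?q=*:*&sort=category_s asc
 places documents without a category_s value after all documents that have one, for both
 ascending and descending sorts.
-->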
<!--
 Default numeric types, used for range queries; for faster range queries consider the
 tint/tfloat/tlong/tdouble types below.
 These types support docValues, but the fields should be single-valued.
-->
<fieldType name="int" class="solr.TrieIntField" docValues="true" precisionStep="0" positionIncrementGap="0"/>
<fieldType name="float" class="solr.TrieFloatField" docValues="true" precisionStep="0" positionIncrementGap="0"/>
<fieldType name="long" class="solr.TrieLongField" docValues="true" precisionStep="0" positionIncrementGap="0"/>
<fieldType name="double" class="solr.TrieDoubleField" docValues="true" precisionStep="0" positionIncrementGap="0"/> <fieldType name="ints" class="solr.TrieIntField" docValues="true" precisionStep="0" positionIncrementGap="0" multiValued="true"/>
<fieldType name="floats" class="solr.TrieFloatField" docValues="true" precisionStep="0" positionIncrementGap="0" multiValued="true"/>
<fieldType name="longs" class="solr.TrieLongField" docValues="true" precisionStep="0" positionIncrementGap="0" multiValued="true"/>
<fieldType name="doubles" class="solr.TrieDoubleField" docValues="true" precisionStep="0" positionIncrementGap="0" multiValued="true"/> <!--
各个精度值
-->
<fieldType name="tint" class="solr.TrieIntField" docValues="true" precisionStep="8" positionIncrementGap="0"/>
<fieldType name="tfloat" class="solr.TrieFloatField" docValues="true" precisionStep="8" positionIncrementGap="0"/>
<fieldType name="tlong" class="solr.TrieLongField" docValues="true" precisionStep="8" positionIncrementGap="0"/>
<fieldType name="tdouble" class="solr.TrieDoubleField" docValues="true" precisionStep="8" positionIncrementGap="0"/> <fieldType name="tints" class="solr.TrieIntField" docValues="true" precisionStep="8" positionIncrementGap="0" multiValued="true"/>
<fieldType name="tfloats" class="solr.TrieFloatField" docValues="true" precisionStep="8" positionIncrementGap="0" multiValued="true"/>
<fieldType name="tlongs" class="solr.TrieLongField" docValues="true" precisionStep="8" positionIncrementGap="0" multiValued="true"/>
<fieldType name="tdoubles" class="solr.TrieDoubleField" docValues="true" precisionStep="8" positionIncrementGap="0" multiValued="true"/> <!-- 日期格式
Note: -->
<fieldType name="date" class="solr.TrieDateField" docValues="true" precisionStep="0" positionIncrementGap="0"/>
<fieldType name="dates" class="solr.TrieDateField" docValues="true" precisionStep="0" positionIncrementGap="0" multiValued="true"/> <!-- 一种基于树结构的日期字段,日期范围查询与数据分类-->
<fieldType name="tdate" class="solr.TrieDateField" docValues="true" precisionStep="6" positionIncrementGap="0"/> <fieldType name="tdates" class="solr.TrieDateField" docValues="true" precisionStep="6" positionIncrementGap="0" multiValued="true"/> -->
<fieldType name="binary" class="solr.BinaryField"/> <!-- The "RandomSortField" is not used to store or search any
data. You can declare fields of this type it in your schema
to generate pseudo-random orderings of your docs for sorting
or function purposes. The ordering is generated based on the field
name and the version of the index. As long as the index version
remains unchanged, and the same field name is reused,
the ordering of the docs will be consistent.
If you want different pseudo-random orderings of documents,
for the same version of the index, use a dynamicField and
change the field name in the request.
-->
<fieldType name="random" class="solr.RandomSortField" indexed="true" /> <!-- solr.TextField allows the specification of custom text analyzers
<!-- solr.TextField allows the specification of custom text analyzers
 specified as a tokenizer and a list of token filters. Different
analyzers may be specified for indexing and querying. The optional positionIncrementGap puts space between multiple fields of
this type on the same document, with the purpose of preventing false phrase
matching across fields. For more info on customizing your analyzer chain, please see
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
-->
<!-- One can also specify an existing Analyzer class that has a
default constructor via the class attribute on the analyzer element.
Example:
<fieldType name="text_greek" class="solr.TextField">
<analyzer class="org.apache.lucene.analysis.el.GreekAnalyzer"/>
</fieldType>
-->
<!-- A text field that only splits on whitespace for exact matching of words -->
<dynamicField name="*_ws" type="text_ws" indexed="true" stored="true"/>
<fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100">
<analyzer>
<tokenizer class="solr.WhitespaceTokenizerFactory"/>
</analyzer>
</fieldType>
<!-- A general text field that has reasonable, generic
cross-language defaults: it tokenizes with StandardTokenizer,
removes stop words from case-insensitive "stopwords.txt"
(empty by default), and down cases. At query time only, it
also applies synonyms.
-->
<fieldType name="text_general" class="solr.TextField" positionIncrementGap="100" multiValued="true">
<analyzer type="index">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
<!-- in this example, we will only use synonyms at query time
<filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
-->
<filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
<analyzer type="query">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
<filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
</fieldType>
<!-- A text field with defaults appropriate for English: it
tokenizes with StandardTokenizer, removes English stop words
(lang/stopwords_en.txt), down cases, protects words from protwords.txt, and
finally applies Porter's stemming. The query time analyzer
also applies synonyms from synonyms.txt. -->
<dynamicField name="*_txt_en" type="text_en" indexed="true" stored="true"/>
<fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">
<analyzer type="index">
<tokenizer class="solr.StandardTokenizerFactory"/>
<!-- in this example, we will only use synonyms at query time
<filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
-->
<!-- Case insensitive stop word removal.
-->
<filter class="solr.StopFilterFactory"
ignoreCase="true"
words="lang/stopwords_en.txt"
/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.EnglishPossessiveFilterFactory"/>
<filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
<!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
<filter class="solr.EnglishMinimalStemFilterFactory"/>
-->
<filter class="solr.PorterStemFilterFactory"/>
</analyzer>
<analyzer type="query">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
<filter class="solr.StopFilterFactory"
ignoreCase="true"
words="lang/stopwords_en.txt"
/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.EnglishPossessiveFilterFactory"/>
<filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
<!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
<filter class="solr.EnglishMinimalStemFilterFactory"/>
-->
<filter class="solr.PorterStemFilterFactory"/>
</analyzer>
</fieldType>
<!-- A text field with defaults appropriate for English, plus
aggressive word-splitting and autophrase features enabled.
This field is just like text_en, except it adds
WordDelimiterFilter to enable splitting and matching of
words on case-change, alpha numeric boundaries, and
non-alphanumeric chars. This means certain compound word
cases will work, for example query "wi fi" will match
document "WiFi" or "wi-fi".
-->
<dynamicField name="*_txt_en_split" type="text_en_splitting" indexed="true" stored="true"/>
<fieldType name="text_en_splitting" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
<analyzer type="index">
<tokenizer class="solr.WhitespaceTokenizerFactory"/>
<!-- in this example, we will only use synonyms at query time
<filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
-->
<!-- Case insensitive stop word removal.
-->
<filter class="solr.StopFilterFactory"
ignoreCase="true"
words="lang/stopwords_en.txt"
/>
<filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
<filter class="solr.PorterStemFilterFactory"/>
</analyzer>
<analyzer type="query">
<tokenizer class="solr.WhitespaceTokenizerFactory"/>
<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
<filter class="solr.StopFilterFactory"
ignoreCase="true"
words="lang/stopwords_en.txt"
/>
<filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
<filter class="solr.PorterStemFilterFactory"/>
</analyzer>
</fieldType>
<!-- Less flexible matching, but less false matches. Probably not ideal for product names,
but may be good for SKUs. Can insert dashes in the wrong place and still match. -->
<dynamicField name="*_txt_en_split_tight" type="text_en_splitting_tight" indexed="true" stored="true"/>
<fieldType name="text_en_splitting_tight" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
<analyzer>
<tokenizer class="solr.WhitespaceTokenizerFactory"/>
<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
<filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>
<filter class="solr.WordDelimiterFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
<filter class="solr.EnglishMinimalStemFilterFactory"/>
<!-- this filter can remove any duplicate tokens that appear at the same position - sometimes
possible with WordDelimiterFilter in conjunction with stemming. -->
<filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
</analyzer>
</fieldType>
<!-- Just like text_general except it reverses the characters of
each token, to enable more efficient leading wildcard queries.
-->
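<!--
 Illustration (the field name is hypothetical): because the index-time analyzer stores
 reversed tokens alongside the originals (withOriginal="true"), a leading-wildcard query
 such as
   name_txt_rev:*phone
 can be answered from the reversed terms instead of scanning the whole term dictionary.
-->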
<dynamicField name="*_txt_rev" type="text_general_rev" indexed="true" stored="true"/>
<fieldType name="text_general_rev" class="solr.TextField" positionIncrementGap="100">
<analyzer type="index">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.ReversedWildcardFilterFactory" withOriginal="true"
maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>
</analyzer>
<analyzer type="query">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
<filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
</fieldType>
<dynamicField name="*_phon_en" type="phonetic_en" indexed="true" stored="true"/>
<fieldType name="phonetic_en" stored="false" indexed="true" class="solr.TextField" >
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.DoubleMetaphoneFilterFactory" inject="false"/>
</analyzer>
</fieldType>
<!-- lowercases the entire field value, keeping it as a single token. -->
<dynamicField name="*_s_lower" type="lowercase" indexed="true" stored="true"/>
<fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">
<analyzer>
<tokenizer class="solr.KeywordTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory" />
</analyzer>
</fieldType>
<!--
Example of using PathHierarchyTokenizerFactory at index time, so
queries for paths match documents at that path, or in descendent paths
-->
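<!--
 Example (the field name is hypothetical): a document indexed with
 dir_descendent_path = /usr/local/solr produces the tokens /usr, /usr/local and
 /usr/local/solr, so the query dir_descendent_path:"/usr/local" matches this document and
 any other document at or below /usr/local.
-->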
<dynamicField name="*_descendent_path" type="descendent_path" indexed="true" stored="true"/>
<fieldType name="descendent_path" class="solr.TextField">
<analyzer type="index">
<tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="/" />
</analyzer>
<analyzer type="query">
<tokenizer class="solr.KeywordTokenizerFactory" />
</analyzer>
</fieldType>
<!--
Example of using PathHierarchyTokenizerFactory at query time, so
queries for paths match documents at that path, or in ancestor paths
-->
<dynamicField name="*_ancestor_path" type="ancestor_path" indexed="true" stored="true"/>
<fieldType name="ancestor_path" class="solr.TextField">
<analyzer type="index">
<tokenizer class="solr.KeywordTokenizerFactory" />
</analyzer>
<analyzer type="query">
<tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="/" />
</analyzer>
</fieldType>
<!-- since fields of this type are by default not stored or indexed,
any data added to them will be ignored outright. -->
<fieldType name="ignored" stored="false" indexed="false" docValues="false" multiValued="true" class="solr.StrField" /> <!-- This point type indexes the coordinates as separate fields (subFields)
If subFieldType is defined, it references a type, and a dynamic field
definition is created matching *___<typename>. Alternately, if
subFieldSuffix is defined, that is used to create the subFields.
Example: if subFieldType="double", then the coordinates would be
indexed in fields myloc_0___double,myloc_1___double.
Example: if subFieldSuffix="_d" then the coordinates would be indexed
in fields myloc_0_d,myloc_1_d
The subFields are an implementation detail of the fieldType, and end
users normally should not need to know about them.
-->
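<!--
 Example (the field name is hypothetical): indexing
   <field name="home_point">45.17614,-93.87341</field>
 through the "*_point" dynamic field below (subFieldSuffix="_d") stores the two
 coordinates in the hidden subfields home_point_0_d and home_point_1_d.
-->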
<dynamicField name="*_point" type="point" indexed="true" stored="true"/>
<fieldType name="point" class="solr.PointType" dimension="2" subFieldSuffix="_d"/> <!-- A specialized field for geospatial search. If indexed, this fieldType must not be multivalued. -->
<fieldType name="location" class="solr.LatLonType" subFieldSuffix="_coordinate"/> <!-- An alternative geospatial field type new to Solr 4. It supports multiValued and polygon shapes.
For more information about this and other Spatial fields new to Solr 4, see:
http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
-->
<fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers" /> <!-- Money/currency field type. See http://wiki.apache.org/solr/MoneyFieldType
Parameters:
defaultCurrency: Specifies the default currency if none specified. Defaults to "USD"
precisionStep: Specifies the precisionStep for the TrieLong field used for the amount
providerClass: Lets you plug in other exchange provider backend:
solr.FileExchangeRateProvider is the default and takes one parameter:
currencyConfig: name of an xml file holding exchange rates
solr.OpenExchangeRatesOrgProvider uses rates from openexchangerates.org:
ratesFileLocation: URL or path to rates JSON file (default latest.json on the web)
refreshInterval: Number of minutes between each rates fetch (default: 1440, min: 60)
-->
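<!--
 Illustration (the field name "price" is hypothetical and would need to be declared with
 type "currency"): values look like "19.99,USD", and a range query such as
   price:[10.00,USD TO 20.00,USD]
 is evaluated using the exchange rates configured in currency.xml.
-->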
<fieldType name="currency" class="solr.CurrencyField" precisionStep="8" defaultCurrency="USD" currencyConfig="currency.xml" /> <!-- some examples for different languages (generally ordered by ISO code) -->
<!-- Armenian -->
<dynamicField name="*_txt_hy" type="text_hy" indexed="true" stored="true"/>
<fieldType name="text_hy" class="solr.TextField" positionIncrementGap="100">
<analyzer>
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_hy.txt" />
<filter class="solr.SnowballPorterFilterFactory" language="Armenian"/>
</analyzer>
</fieldType>
</schema>