Sphinx is a full-text search engine developed by Andrew Aksyonoff (a Russian developer) that integrates with MySQL and PostgreSQL. It aims to give other applications fast, space-efficient, high-relevance full-text search, which makes it a sharp tool for on-site search.

Sphinx has been around for years, so it is hardly a new technology, yet it is still widely used. As IT keeps evolving, several newer full-text search tools have appeared in the community; Lucene, for example, is a comparable option, but its indexing speed is far slower than Sphinx's. This article is not a pitch for how great Sphinx is; it is simply a set of notes from my own experience using it, offered as a reference.

There is plenty of learning material for Sphinx; http://www.sphinxsearch.org/archives/80 walks through the installation step by step.

This walkthrough uses CentOS 6.5 and Sphinx 2.2.

There are two ways to use Sphinx with MySQL (the summary below is excerpted from other material):
① API calls: query through the API functions or classes for PHP, Java, and so on. The advantages are that MySQL does not need to be recompiled, the server processes stay loosely coupled, and programs can call the API flexibly and conveniently. The drawback is that an existing search program must be partially modified. This is the approach recommended for application developers.

② Plugin mode (SphinxSE): compile Sphinx into a MySQL plugin and search through special SQL statements. Its strength is that queries compose conveniently on the SQL side and data is returned directly to the client without a second query (see the note below); only the relevant SQL needs to change. It is awkward, however, for projects built on frameworks, especially ones using an ORM. It also requires recompiling MySQL, and plugin storage engines need MySQL 5.1 or later. System administrators tend to prefer this approach.

(Note) In API mode, Sphinx can only return the IDs of matching records, not the SQL data you actually want, so you must query the database again by those IDs to fetch the rows.
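The two-step pattern that note describes can be sketched in C# roughly as follows. The table and column names are hypothetical, and BuildIdLookupSql is a helper invented for illustration; in real code the IDs would come from sphinxClient.Query(...).matches:

```csharp
using System;
using System.Collections.Generic;

// Sketch of the two-step lookup forced by API mode: Sphinx returns only
// document IDs, so we build a second SQL query to fetch the actual rows.
class TwoStepLookup
{
    // Table and column names below are made up for illustration.
    public static string BuildIdLookupSql(IEnumerable<long> matchedIds)
    {
        return "SELECT id, title, content FROM articles WHERE id IN ("
               + string.Join(",", matchedIds) + ")";
    }

    static void Main()
    {
        // In real code these IDs would come from sphinxClient.Query(...).matches.
        var ids = new long[] { 3, 17, 42 };
        Console.WriteLine(BuildIdLookupSql(ids));
    }
}
```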

That note is the single most important thing to understand when learning Sphinx. For rapid development, or when optimizing an existing project, the first approach is usually the only practical one. The plugin approach attracts few takers now that ORM frameworks are everywhere, and for us .NET programmers it is not even on the table.

So let's look at the first approach. Its configuration is fairly simple and can be completed by following the Chinese manual; the interesting part is how to use the configured Sphinx server from .NET:
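For orientation, a minimal sphinx.conf for this mode might look like the sketch below. The credentials, table, and paths are placeholders, and the manual linked above covers the full option set:

```ini
# Hypothetical minimal configuration; names, credentials, and paths are placeholders.
source articles_src
{
    type     = mysql
    sql_host = localhost
    sql_user = root
    sql_pass = secret
    sql_db   = mydb
    # The first column must be the unique document ID.
    sql_query = SELECT id, title, content FROM articles
}

index articles_idx
{
    source = articles_src
    path   = /var/data/sphinx/articles_idx
}

searchd
{
    listen   = 9312
    log      = /var/log/sphinx/searchd.log
    pid_file = /var/run/sphinx/searchd.pid
}
```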

(1)sphinx connector.net

The Sphinx project itself ships no C#/.NET API, which left us .NET developers wondering how to get by. Fortunately others stepped in: Sphinx Connector.NET provides an interface to the Sphinx server with tight LINQ integration, which is genuinely nice. But... it is a paid product, and without a license a search returns at most 5 records. .NET salaries being what they are, I gave up on it and kept hunting for a free alternative.

(2) SphinxClient

Someone did port the PHP API source to .NET and published a SphinxClient class library that works more or less out of the box. Standing on the shoulders of giants is a comfortable place to be.

Using Sphinx with MySQL boils down to maintaining a faster external index over your database columns: once MySQL holds a huge volume of data, Sphinx can still answer searches in about a second, whereas a traditional SQL LIKE query... well, you can imagine the result.
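To make the contrast concrete, here is the difference in query form. The articles table and articles_idx index are hypothetical; the second statement is SphinxQL, issued against searchd rather than MySQL:

```sql
-- Traditional MySQL: a leading wildcard defeats any B-tree index,
-- so this degrades to a full table scan on large tables.
SELECT id FROM articles WHERE content LIKE '%keyword%';

-- Sphinx (SphinxQL): answered from the prebuilt full-text index.
SELECT id FROM articles_idx WHERE MATCH('keyword');
```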

Sphinx does not support Chinese text search out of the box. To search Chinese you must either use a character table or add real Chinese word segmentation.

(1) Character table: Sphinx splits Chinese text into single characters and indexes each character individually; this works, but is comparatively slow.

Setting up the character table is simple; http://www.sphinxsearch.org/archives/80 shows how.
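In index terms, the single-character approach boils down to two settings, sketched below (assuming the hypothetical articles_idx index from earlier; the linked article gives the complete character table):

```ini
index articles_idx
{
    # ...source and path as before...
    # Index every CJK character as its own token (1-gram segmentation).
    ngram_len   = 1
    ngram_chars = U+3000..U+2FA1F
}
```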

(2) Word segmentation: use a segmentation plugin such as Coreseek or sfc. These ship good segmentation algorithms and are comparatively faster.

Coreseek is still actively maintained and has reached version 4; I recommend it. See http://www.coreseek.cn/docs/coreseek_3.2-sphinx_0.9.9.html#installing for a very detailed guide.

Once the server is configured, we create a .NET solution to talk to the Sphinx server:

using System;

namespace phinxDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            SphinxClient sphinxClient = new SphinxClient("192.168.233.129", 9312);
            while (true)
            {
                // Read the command inside the loop so each iteration gets fresh input.
                Console.WriteLine("Choose an operation: [a] query, [b] insert, [c] delete");
                string inputstr = Console.ReadLine();
                switch (inputstr)
                {
                    case "a":
                        // query data
                        Console.WriteLine("---------------Query---------------------");
                        Console.WriteLine("Enter the string to match");
                        QueryData(Console.ReadLine(), sphinxClient);
                        break;
                    case "b":
                        // insert data (not implemented in this demo)
                        Console.WriteLine("---------------Insert---------------------");
                        Console.WriteLine("Enter the string to insert");
                        break;
                    case "c":
                        // delete data (not implemented in this demo)
                        Console.WriteLine("---------------Delete---------------------");
                        Console.WriteLine("Enter the string to delete");
                        break;
                }
            }
        }

        private static void QueryData(string p, SphinxClient sphinxClient)
        {
            var sphinxSearchResult = sphinxClient.Query(p);
            Console.WriteLine("Matches returned from the server for this query: {0}", sphinxSearchResult.total);
            Console.WriteLine("Total matching documents in the index: {0}", sphinxSearchResult.totalFound);
            Console.WriteLine("Error reported by searchd: {0}", sphinxSearchResult.error);
            Console.WriteLine("Warning reported by searchd: {0}", sphinxSearchResult.warning);
            foreach (SphinxMatch match in sphinxSearchResult.matches)
            {
                Console.WriteLine("DocumentId {0} Weight {1}", match.docId, match.weight);
            }
            Console.ReadLine();
        }
    }
}

Here is that community SphinxClient implementation:

  1. using System;
  2. using System.Collections;
  3. using System.Collections.Generic;
  4. using System.IO;
  5. using System.Linq;
  6. using System.Net.Sockets;
  7. using System.Text;
  8.  
  9. namespace phinxDemo
  10. {
  11.  
  12. public class SphinxClient : IDisposable
  13. {
  14. #region Static Values
  15. /* matching modes */
  16. public static int SPH_MATCH_ALL = 0;
  17. public static int SPH_MATCH_ANY = 1;
  18. public static int SPH_MATCH_PHRASE = 2;
  19. public static int SPH_MATCH_BOOLEAN = 3;
  20. public static int SPH_MATCH_EXTENDED = 4;
  21. public static int SPH_MATCH_FULLSCAN = 5;
  22. public static int SPH_MATCH_EXTENDED2 = 6;
  23.  
  24. /* sorting modes */
  25. public static int SPH_SORT_RELEVANCE = 0;
  26. public static int SPH_SORT_ATTR_DESC = 1;
  27. public static int SPH_SORT_ATTR_ASC = 2;
  28. public static int SPH_SORT_TIME_SEGMENTS = 3;
  29. public static int SPH_SORT_EXTENDED = 4;
  30. public static int SPH_SORT_EXPR = 5;
  31.  
  32. /* grouping functions */
  33. public static int SPH_GROUPBY_DAY = 0;
  34. public static int SPH_GROUPBY_WEEK = 1;
  35. public static int SPH_GROUPBY_MONTH = 2;
  36. public static int SPH_GROUPBY_YEAR = 3;
  37. public static int SPH_GROUPBY_ATTR = 4;
  38. public static int SPH_GROUPBY_ATTRPAIR = 5;
  39.  
  40. /* searchd reply status codes */
  41. public static int SEARCHD_OK = 0;
  42. public static int SEARCHD_ERROR = 1;
  43. public static int SEARCHD_RETRY = 2;
  44. public static int SEARCHD_WARNING = 3;
  45.  
  46. /* attribute types */
  47. public static int SPH_ATTR_INTEGER = 1;
  48. public static int SPH_ATTR_TIMESTAMP = 2;
  49. public static int SPH_ATTR_ORDINAL = 3;
  50. public static int SPH_ATTR_BOOL = 4;
  51. public static int SPH_ATTR_FLOAT = 5;
  52. public static int SPH_ATTR_BIGINT = 6;
  53. public static int SPH_ATTR_MULTI = 0x40000000;
  54.  
  55. /* searchd commands */
  56. private static int SEARCHD_COMMAND_SEARCH = 0;
  57. private static int SEARCHD_COMMAND_EXCERPT = 1;
  58. private static int SEARCHD_COMMAND_UPDATE = 2;
  59. private static int SEARCHD_COMMAND_KEYWORDS = 3;
  60. private static int SEARCHD_COMMAND_PERSIST = 4;
  61. private static int SEARCHD_COMMAND_STATUS = 5;
  62. private static int SEARCHD_COMMAND_QUERY = 6;
  63.  
  64. /* searchd command versions */
  65. private static int VER_COMMAND_SEARCH = 0x116;
  66. private static int VER_COMMAND_EXCERPT = 0x100;
  67. private static int VER_COMMAND_UPDATE = 0x102;
  68. private static int VER_COMMAND_KEYWORDS = 0x100;
  69. private static int VER_COMMAND_STATUS = 0x100;
  70. private static int VER_COMMAND_QUERY = 0x100;
  71.  
  72. /* filter types */
  73. private static int SPH_FILTER_VALUES = 0;
  74. private static int SPH_FILTER_RANGE = 1;
  75. private static int SPH_FILTER_FLOATRANGE = 2;
  76.  
  77. private static int SPH_CLIENT_TIMEOUT_MILLISEC = 0;
  78.  
  79. #endregion
  80.  
  81. #region Variable Declaration
  82.  
  83. private string _host;
  84. private int _port;
  85. private int _offset;
  86. private int _limit;
  87. private int _mode;
  88. private int[] _weights;
  89. private int _sort;
  90. private string _sortby;
  91. private long _minId;
  92. private long _maxId;
  93. private int _filterCount;
  94. private string _groupBy;
  95. private int _groupFunc;
  96. private string _groupSort;
  97. private string _groupDistinct;
  98. private int _maxMatches;
  99. private int _cutoff;
  100. private int _retrycount;
  101. private int _retrydelay;
  102. private string _latitudeAttr;
  103. private string _longitudeAttr;
  104. private float _latitude;
  105. private float _longitude;
  106. private string _error;
  107. private string _warning;
  108. private Dictionary<string, int> _fieldWeights;
  109.  
  110. private TcpClient _conn;
  111.  
  112. // request queries already created
  113. List<byte[]> _requestQueries = new List<byte[]>();
  114.  
  115. private Dictionary<string, int> _indexWeights;
  116.  
  117. // use a memorystream instead of a byte array because it's easier to augment
  118. MemoryStream _filterStreamData = new MemoryStream();
  119.  
  120. #endregion
  121.  
  122. #region Constructors
  123.  
  124. /**
  125. * Creates new SphinxClient instance.
  126. *
  127. * Default host and port that the instance will connect to are
  128. * localhost:3312. That can be overriden using {@link #SetServer SetServer()}.
  129. */
  130. public SphinxClient()
  131. : this(AppConfig.GetSetting("SphinxServer"), AppConfig.GetRequiredSettingAsInt("SphinxServerPort"))
  132. {
  133. }
  134.  
  135. /**
  136. * Creates new SphinxClient instance, with host:port specification.
  137. *
  138. * Host and port can be later overriden using {@link #SetServer SetServer()}.
  139. *
  140. * @param host searchd host name (default: localhost)
  141. * @param port searchd port number (default: 3312)
  142. */
  143. public SphinxClient(string host, int port)
  144. {
  145. _host = host;
  146. _port = port;
  147. _offset = 0;
  148. _limit = 20;
  149. _mode = SPH_MATCH_ALL;
  150. _sort = SPH_SORT_RELEVANCE;
  151. _sortby = "";
  152. _minId = 0;
  153. _maxId = 0;
  154.  
  155. _filterCount = 0;
  156.  
  157. _groupBy = "";
  158. _groupFunc = SPH_GROUPBY_DAY;
  159. // _groupSort = "@group desc";
  160. _groupSort = "";
  161. _groupDistinct = "";
  162.  
  163. _maxMatches = 1000;
  164. _cutoff = 0;
  165. _retrycount = 0;
  166. _retrydelay = 0;
  167.  
  168. _latitudeAttr = null;
  169. _longitudeAttr = null;
  170. _latitude = 0;
  171. _longitude = 0;
  172.  
  173. _error = "";
  174. _warning = "";
  175.  
  176. //_reqs = new ArrayList();
  177. _weights = null;
  178. _indexWeights = new Dictionary<string, int>();
  179. _fieldWeights = new Dictionary<string, int>();
  180.  
  181. }
  182.  
  183. #endregion
  184.  
  185. #region Main Functions
  186. /** Connect to searchd server and run current search query against all indexes (syntax sugar). */
  187. public SphinxResult Query(string query)
  188. {
  189. return Query(query, "*");
  190. }
  191.  
  192. /**
  193. * Connect to searchd server and run current search query.
  194. *
  195. * @param query query string
  196. * @param index index name(s) to query. May contain anything-separated
  197. * list of index names, or "*" which means to query all indexes.
  198. * @return {@link SphinxResult} object
  199. *
  200. * @throws SphinxException on invalid parameters
  201. */
  202. public SphinxResult Query(string query, string index)
  203. {
  204. //MyAssert(_requestQueries == null || _requestQueries.Count == 0, "AddQuery() and Query() can not be combined; use RunQueries() instead");
  205.  
  206. AddQuery(query, index);
  207. SphinxResult[] results = RunQueries();
  208. if (results == null || results.Length < 1)
  209. {
  210. return null; /* probably network error; error message should be already filled */
  211. }
  212.  
  213. SphinxResult res = results[0];
  214. _warning = res.warning;
  215. _error = res.error;
  216. return res;
  217. }
  218.  
  219. public int AddQuery(string query, string index)
  220. {
  221. byte[] outputdata = new byte[2048];
  222.  
  223. /* build request */
  224. try
  225. {
  226. MemoryStream ms = new MemoryStream();
  227. BinaryWriter sw = new BinaryWriter(ms);
  228.  
  229. WriteToStream(sw, _offset);
  230. WriteToStream(sw, _limit);
  231. WriteToStream(sw, _mode);
  232. WriteToStream(sw, 0); //SPH_RANK_PROXIMITY_BM25
  233. WriteToStream(sw, _sort);
  234.  
  235. WriteToStream(sw, _sortby);
  236. WriteToStream(sw, query);
  237.  
  238. //_weights = new int[] { 100, 1 };
  239. _weights = null;
  240.  
  241. int weightLen = _weights != null ? _weights.Length : 0;
  242.  
  243. WriteToStream(sw, weightLen);
  244. if (_weights != null)
  245. {
  246. for (int i = 0; i < _weights.Length; i++)
  247. WriteToStream(sw, _weights[i]);
  248. }
  249.  
  250. WriteToStream(sw, index);
  251. WriteToStream(sw, 1); // id64
  252. WriteToStream(sw, _minId);
  253. WriteToStream(sw, _maxId);
  254.  
  255. /* filters */
  256. WriteToStream(sw, _filterCount);
  257. if (_filterCount > 0 && _filterStreamData.Length > 0)
  258. {
  259. byte[] filterdata = new byte[_filterStreamData.Length];
  260. _filterStreamData.Seek(0, SeekOrigin.Begin);
  261. _filterStreamData.Read(filterdata, 0, (int)_filterStreamData.Length);
  262. WriteToStream(sw, filterdata);
  263. }
  264.  
  265. /* group-by, max matches, sort-by-group flag */
  266. WriteToStream(sw, _groupFunc);
  267. WriteToStream(sw, _groupBy);
  268. WriteToStream(sw, _maxMatches);
  269. WriteToStream(sw, _groupSort);
  270.  
  271. WriteToStream(sw, _cutoff);
  272. WriteToStream(sw, _retrycount);
  273. WriteToStream(sw, _retrydelay);
  274.  
  275. WriteToStream(sw, _groupDistinct);
  276.  
  277. /* anchor point */
  278. if (_latitudeAttr == null || _latitudeAttr.Length == 0 || _longitudeAttr == null || _longitudeAttr.Length == 0)
  279. {
  280. WriteToStream(sw, 0);
  281. }
  282. else
  283. {
  284. WriteToStream(sw, 1);
  285. WriteToStream(sw, _latitudeAttr);
  286. WriteToStream(sw, _longitudeAttr);
  287. WriteToStream(sw, _latitude);
  288. WriteToStream(sw, _longitude);
  289. }
  290.  
  291. /* per-index weights */
  292. //sw.Write(_indexWeights.size());
  293. WriteToStream(sw, this._indexWeights.Count);
  294. foreach (KeyValuePair<string, int> item in this._indexWeights)
  295. {
  296. WriteToStream(sw, item.Key);
  297. WriteToStream(sw, item.Value);
  298. }
  299.  
  300. // max query time
  301. WriteToStream(sw, 0);
  302. // per-field weights
  303. WriteToStream(sw, this._fieldWeights.Count);
  304. foreach (KeyValuePair<string, int> item in this._fieldWeights)
  305. {
  306. WriteToStream(sw, item.Key);
  307. WriteToStream(sw, item.Value);
  308. }
  309. // comment
  310. WriteToStream(sw, "");
  311. // attribute overrides
  312. WriteToStream(sw, 0);
  313. // select-list
  314. WriteToStream(sw, "*");
  315. sw.Flush();
  316. ms.Seek(0, SeekOrigin.Begin);
  317.  
  318. byte[] data = new byte[ms.Length];
  319. ms.Read(data, 0, (int)ms.Length);
  320.  
  321. int qIndex = _requestQueries.Count;
  322. _requestQueries.Add(data);
  323.  
  324. return qIndex;
  325.  
  326. }
  327. catch (Exception ex)
  328. {
  329. //MyAssert(false, "error on AddQuery: " + ex.Message);
  330. }
  331. return -1;
  332. }
  333.  
  334. /** Run all previously added search queries. */
  335. public SphinxResult[] RunQueries()
  336. {
  337. if (_requestQueries == null || _requestQueries.Count < 1)
  338. {
  339. _error = "no queries defined, issue AddQuery() first";
  340. return null;
  341. }
  342.  
  343. if (Conn == null) return null;
  344.  
  345. MemoryStream ms = new MemoryStream();
  346. BinaryWriter bw = new BinaryWriter(ms);
  347.  
  348. /* send query, get response */
  349. int nreqs = _requestQueries.Count;
  350. try
  351. {
  352.  
  353. WriteToStream(bw, (short)SEARCHD_COMMAND_SEARCH);
  354. WriteToStream(bw, (short)VER_COMMAND_SEARCH);
  355.  
  356. //return null;
  357. int rqLen = 4;
  358. for (int i = 0; i < nreqs; i++)
  359. {
  360. byte[] subRq = (byte[])_requestQueries[i];
  361. rqLen += subRq.Length;
  362. }
  363. WriteToStream(bw, rqLen);
  364. WriteToStream(bw, nreqs);
  365.  
  366. for (int i = 0; i < nreqs; i++)
  367. {
  368. byte[] subRq = (byte[])_requestQueries[i];
  369. WriteToStream(bw, subRq);
  370. }
  371. ms.Flush();
  372. byte[] buffer = new byte[ms.Length];
  373. ms.Seek(0, SeekOrigin.Begin);
  374. ms.Read(buffer, 0, buffer.Length);
  375. bw = new BinaryWriter(Conn.GetStream());
  376. bw.Write(buffer, 0, buffer.Length);
  377. bw.Flush();
  378. bw.BaseStream.Flush();
  379. ms.Close();
  380. }
  381. catch (Exception e)
  382. {
  383. //MyAssert(false, "Query: Unable to create read/write streams: " + e.Message);
  384. return null;
  385. }
  386.  
  387. /* get response */
  388. byte[] response = GetResponse(Conn, VER_COMMAND_SEARCH);
  389.  
  390. /* parse response */
  391. SphinxResult[] results = ParseResponse(response);
  392.  
  393. /* reset requests */
  394. _requestQueries = new List<byte[]>();
  395.  
  396. return results;
  397. }
  398.  
  399. private SphinxResult[] ParseResponse(byte[] response)
  400. {
  401. if (response == null) return null;
  402.  
  403. /* parse response */
  404. SphinxResult[] results = new SphinxResult[_requestQueries.Count];
  405.  
  406. BinaryReader br = new BinaryReader(new MemoryStream(response));
  407.  
  408. /* read schema */
  409. int ires;
  410. try
  411. {
  412. for (ires = 0; ires < _requestQueries.Count; ires++)
  413. {
  414. SphinxResult res = new SphinxResult();
  415. results[ires] = res;
  416.  
  417. int status = ReadInt32(br);
  418. res.setStatus(status);
  419. if (status != SEARCHD_OK)
  420. {
  421. string message = ReadUtf8(br);
  422. if (status == SEARCHD_WARNING)
  423. {
  424. res.warning = message;
  425. }
  426. else
  427. {
  428. res.error = message;
  429. continue;
  430. }
  431. }
  432.  
  433. /* read fields */
  434. int nfields = ReadInt32(br);
  435. res.fields = new string[nfields];
  436. //int pos = 0;
  437. for (int i = 0; i < nfields; i++)
  438. res.fields[i] = ReadUtf8(br);
  439.  
  440. /* read arrts */
  441. int nattrs = ReadInt32(br);
  442. res.attrTypes = new int[nattrs];
  443. res.attrNames = new string[nattrs];
  444. for (int i = 0; i < nattrs; i++)
  445. {
  446. string AttrName = ReadUtf8(br);
  447. int AttrType = ReadInt32(br);
  448. res.attrNames[i] = AttrName;
  449. res.attrTypes[i] = AttrType;
  450. }
  451.  
  452. /* read match count */
  453. int count = ReadInt32(br);
  454. int id64 = ReadInt32(br);
  455. res.matches = new SphinxMatch[count];
  456. for (int matchesNo = 0; matchesNo < count; matchesNo++)
  457. {
  458. SphinxMatch docInfo;
  459. docInfo = new SphinxMatch(
  460. (id64 == 0) ? ReadUInt32(br) : ReadInt64(br),
  461. ReadInt32(br));
  462.  
  463. /* read matches */
  464. for (int attrNumber = 0; attrNumber < res.attrTypes.Length; attrNumber++)
  465. {
  466. string attrName = res.attrNames[attrNumber];
  467. int type = res.attrTypes[attrNumber];
  468.  
  469. /* handle bigint */
  470. if (type == SPH_ATTR_BIGINT)
  471. {
  472. docInfo.attrValues.Add(ReadInt64(br));
  473. continue;
  474. }
  475.  
  476. /* handle floats */
  477. if (type == SPH_ATTR_FLOAT)
  478. {
  479. docInfo.attrValues.Add(ReadFloat(br));
  480. //docInfo.attrValues.add ( attrNumber, bw.ReadDouble ) );
  481. //throw new NotImplementedException("we don't read floats yet");
  482. continue;
  483. }
  484.  
  485. /* handle everything else as unsigned ints */
  486. long val = ReadUInt32(br);
  487. if ((type & SPH_ATTR_MULTI) != 0)
  488. {
  489. long[] vals = new long[(int)val];
  490. for (int k = 0; k < val; k++)
  491. vals[k] = ReadUInt32(br);
  492.  
  493. docInfo.attrValues.Add(vals);
  494.  
  495. }
  496. else
  497. {
  498. docInfo.attrValues.Add(val);
  499. }
  500. }
  501. res.matches[matchesNo] = docInfo;
  502. }
  503.  
  504. res.total = ReadInt32(br);
  505. res.totalFound = ReadInt32(br);
  506. res.time = ReadInt32(br) / 1000.0f; /* format is %.3f */
  507. int wordsCount = ReadInt32(br);
  508.  
  509. //res.words = new SphinxWordInfo[ReadInt32(bw)];
  510. //for (int i = 0; i < res.words.Length; i++)
  511. // res.words[i] = new SphinxWordInfo(ReadUtf8(bw), ReadUInt32(bw), ReadUInt32(bw));
  512. }
  513.  
  514. br.Close();
  515. return results;
  516.  
  517. }
  518. catch (IOException e)
  519. {
  520. //MyAssert(false, "unable to parse response: " + e.Message);
  521. return null;
  522. }
  523. }
  524.  
  525. private TcpClient Conn
  526. {
  527. get
  528. {
  529. try
  530. {
  531. if (_conn == null || !_conn.Connected)
  532. {
  533. _conn = new TcpClient(_host, _port);
  534.  
  535. NetworkStream ns = _conn.GetStream();
  536. BinaryReader sr = new BinaryReader(ns);
  537. BinaryWriter sw = new BinaryWriter(ns);
  538.  
  539. // check the version.
  540. WriteToStream(sw, 1);
  541. sw.Flush();
  542. int version = 0;
  543. version = ReadInt32(sr);
  544.  
  545. if (version < 1)
  546. {
  547. _conn.Close();
  548. // "expected searchd protocol version 1+, got version " + version;
  549. _conn = null;
  550. return null;
  551. }
  552.  
  553. // set persist connect
  554. WriteToStream(sw, (short)4); // COMMAND_Persist
  555. WriteToStream(sw, (short)0); //PERSIST_COMMAND_VERSION
  556. WriteToStream(sw, 4); // COMMAND_LENGTH
  557. WriteToStream(sw, 1); // PERSIST_COMMAND_BODY
  558. sw.Flush();
  559. }
  560. }
  561. catch (IOException e)
  562. {
  563. try
  564. {
  565. _conn.Close();
  566. }
  567. catch
  568. {
  569. _conn = null;
  570. }
  571. return null;
  572. }
  573. return _conn;
  574. }
  575. }
  576.  
  577. #endregion
  578.  
  579. #region Getters and Setters
  580. /**
  581. * Get last error message, if any.
  582. *
  583. * @return string with last error message (empty string if no errors occured)
  584. */
  585. public string GetLastError()
  586. {
  587. return _error;
  588. }
  589.  
  590. /**
  591. * Get last warning message, if any.
  592. *
  593. * @return string with last warning message (empty string if no errors occured)
  594. */
  595. public string GetLastWarning()
  596. {
  597. return _warning;
  598. }
  599.  
  600. /**
  601. * Set searchd host and port to connect to.
  602. *
  603. * @param host searchd host name (default: localhost)
  604. * @param port searchd port number (default: 3312)
  605. *
  606. * @throws SphinxException on invalid parameters
  607. */
  608. public void SetServer(string host, int port)
  609. {
  610. //MyAssert(host != null && host.Length > 0, "host name must not be empty");
  611. //MyAssert(port > 0 && port < 65536, "port must be in 1..65535 range");
  612. _host = host;
  613. _port = port;
  614. }
  615.  
  616. /** Set matches offset and limit to return to client, max matches to retrieve on server, and cutoff. */
  617. public void SetLimits(int offset, int limit, int max, int cutoff)
  618. {
  619. //MyAssert(offset >= 0, "offset must be greater than or equal to 0");
  620. //MyAssert(limit > 0, "limit must be greater than 0");
  621. //MyAssert(max > 0, "max must be greater than 0");
  622. //MyAssert(cutoff >= 0, "max must be greater than or equal to 0");
  623.  
  624. _offset = offset;
  625. _limit = limit;
  626. _maxMatches = max;
  627. _cutoff = cutoff;
  628. }
  629.  
  630. /** Set matches offset and limit to return to client, and max matches to retrieve on server. */
  631. public void SetLimits(int offset, int limit, int max)
  632. {
  633. SetLimits(offset, limit, max, _cutoff);
  634. }
  635.  
  636. /** Set matches offset and limit to return to client. */
  637. public void SetLimits(int offset, int limit)
  638. {
  639. SetLimits(offset, limit, _maxMatches, _cutoff);
  640. }
  641.  
  642. /** Set matching mode. */
  643. public void SetMatchMode(int mode)
  644. {
  645. //MyAssert(
  646. // mode == SPH_MATCH_ALL ||
  647. // mode == SPH_MATCH_ANY ||
  648. // mode == SPH_MATCH_PHRASE ||
  649. // mode == SPH_MATCH_BOOLEAN ||
  650. // mode == SPH_MATCH_EXTENDED, "unknown mode value; use one of the available SPH_MATCH_xxx constants");
  651. _mode = mode;
  652. }
  653.  
  654. /** Set sorting mode. */
  655. public void SetSortMode(int mode, string sortby)
  656. {
  657. //MyAssert(
  658. // mode == SPH_SORT_RELEVANCE ||
  659. // mode == SPH_SORT_ATTR_DESC ||
  660. // mode == SPH_SORT_ATTR_ASC ||
  661. // mode == SPH_SORT_TIME_SEGMENTS ||
  662. // mode == SPH_SORT_EXTENDED, "unknown mode value; use one of the available SPH_SORT_xxx constants");
  663. //MyAssert(mode == SPH_SORT_RELEVANCE || (sortby != null && sortby.Length > 0), "sortby string must not be empty in selected mode");
  664.  
  665. _sort = mode;
  666. _sortby = (sortby == null) ? "" : sortby;
  667. }
  668.  
  669. /** Set per-field weights (all values must be positive). */
  670. public void SetWeights(int[] weights)
  671. {
  672. //MyAssert(weights != null, "weights must not be null");
  673. for (int i = 0; i < weights.Length; i++)
  674. {
  675. int weight = weights[i];
  676. //MyAssert(weight > 0, "all weights must be greater than 0");
  677. }
  678. _weights = weights;
  679. }
  680.  
  681. public void SetFieldWeights(string field, int weight)
  682. {
  683. if (this._fieldWeights.ContainsKey(field)) this._fieldWeights[field] = weight;
  684. else this._fieldWeights.Add(field, weight);
  685. }
  686.  
  687. /**
  688. * Set per-index weights
  689. *
  690. * @param indexWeights hash which maps string index names to Integer weights
  691. */
  692. public void SetIndexWeights(string index, int weight)
  693. {
  694. if (this._indexWeights.ContainsKey(index)) this._indexWeights[index] = weight;
  695. else this._indexWeights.Add(index, weight);
  696. }
  697.  
  698. /**
  699. * Set document IDs range to match.
  700. *
  701. * Only match those documents where document ID is beetwen given
  702. * min and max values (including themselves).
  703. *
  704. * @param min minimum document ID to match
  705. * @param max maximum document ID to match
  706. *
  707. * @throws SphinxException on invalid parameters
  708. */
  709. public void SetIDRange(int min, int max)
  710. {
  711. //MyAssert(min <= max, "min must be less or equal to max");
  712. _minId = min;
  713. _maxId = max;
  714. }
  715.  
  716. /**
  717. * Set values filter.
  718. *
  719. * Only match those documents where <code>attribute</code> column value
  720. * is in given values set.
  721. *
  722. * @param attribute attribute name to filter by
  723. * @param values values set to match the attribute value by
  724. * @param exclude whether to exclude matching documents instead
  725. *
  726. * @throws SphinxException on invalid parameters
  727. */
  728. public void SetFilter(string attribute, long[] values, bool exclude)
  729. {
  730. //MyAssert(values != null && values.Length > 0, "values array must not be null or empty");
  731. //MyAssert(attribute != null && attribute.Length > 0, "attribute name must not be null or empty");
  732. if (values == null || values.Length == 0) return;
  733.  
  734. try
  735. {
  736. BinaryWriter bw = new BinaryWriter(_filterStreamData);
  737.  
  738. WriteToStream(bw, attribute);
  739. WriteToStream(bw, SPH_FILTER_VALUES);
  740. WriteToStream(bw, values.Length);
  741.  
  742. for (int i = 0; i < values.Length; i++)
  743. WriteToStream(bw, values[i]);
  744.  
  745. WriteToStream(bw, exclude ? 1 : 0);
  746.  
  747. }
  748. catch (Exception e)
  749. {
  750. //MyAssert(false, "IOException: " + e.Message);
  751. }
  752. _filterCount++;
  753. }
  754.  
  755. public void SetFilter(string attribute, int[] values, bool exclude)
  756. {
  757. //MyAssert(values != null && values.Length > 0, "values array must not be null or empty");
  758. //MyAssert(attribute != null && attribute.Length > 0, "attribute name must not be null or empty");
  759. if (values == null || values.Length == 0) return;
  760. long[] v = new long[values.Length];
  761. for (int i = 0; i < values.Length; i++) v[i] = (long)values[i];
  762. SetFilter(attribute, v, exclude);
  763. }
  764.  
  765. /** Set values filter with a single value (syntax sugar; see {@link #SetFilter(string,int[],bool)}). */
  766. public void SetFilter(string attribute, long value, bool exclude)
  767. {
  768. long[] values = new long[] { value };
  769. SetFilter(attribute, values, exclude);
  770. }
  771.  
  772. /** Set values filter with a single value (syntax sugar; see {@link #SetFilter(string,int[],bool)}). */
  773. public void SetFilter(string attribute, int value, bool exclude)
  774. {
  775. long[] values = new long[] { value };
  776. SetFilter(attribute, values, exclude);
  777. }
  778.  
  779. public void SetFilter(string attribute, bool value, bool exclude)
  780. {
  781. SetFilter(attribute, value ? 1 : 0, exclude);
  782. }
  783.  
  784. public void SetFilter(string attribute, DateTime value, bool exclude)
  785. {
  786. SetFilter(attribute, ConvertToUnixTimestamp(value), exclude);
  787. }
  788.  
  789. public void SetFilter(string attribute, DateTime[] values, bool exclude)
  790. {
  791. if (values == null || values.Length == 0) return;
  792. int[] items = new int[values.Length];
  793. for (int i = 0; i < items.Length; i++) items[i] = ConvertToUnixTimestamp(values[i]);
  794. SetFilter(attribute, items, exclude);
  795. }
  796.  
  797. /**
  798. * Set integer range filter.
  799. *
  800. * Only match those documents where <code>attribute</code> column value
  801. * is beetwen given min and max values (including themselves).
  802. *
  803. * @param attribute attribute name to filter by
  804. * @param min min attribute value
  805. * @param max max attribute value
  806. * @param exclude whether to exclude matching documents instead
  807. *
  808. * @throws SphinxException on invalid parameters
  809. */
  810. public void SetFilterRange(string attribute, int min, int max, bool exclude)
  811. {
  812. SetFilterRange(attribute, (long)min, (long)max, exclude);
  813. }
  814.  
  815. public void SetFilterRange(string attribute, DateTime min, DateTime max, bool exclude)
  816. {
  817. SetFilterRange(attribute, ConvertToUnixTimestamp(min), ConvertToUnixTimestamp(max), exclude);
  818. }
  819.  
  820. public void SetFilterRange(string attribute, long min, long max, bool exclude)
  821. {
  822. //MyAssert(min <= max, "min must be less or equal to max");
  823. try
  824. {
  825. BinaryWriter bw = new BinaryWriter(_filterStreamData);
  826. WriteToStream(bw, attribute);
  827. WriteToStream(bw, SPH_FILTER_RANGE);
  828. WriteToStream(bw, min);
  829. WriteToStream(bw, max);
  830. WriteToStream(bw, exclude ? 1 : 0);
  831.  
  832. }
  833. catch (Exception e)
  834. {
  835. //MyAssert(false, "IOException: " + e.Message);
  836. }
  837. _filterCount++;
  838. }
  839.  
  840. /**
  841. * Set float range filter.
  842. *
  843. * Only match those documents where <code>attribute</code> column value
  844. * is beetwen given min and max values (including themselves).
  845. *
  846. * @param attribute attribute name to filter by
  847. * @param min min attribute value
  848. * @param max max attribute value
  849. * @param exclude whether to exclude matching documents instead
  850. *
  851. * @throws SphinxException on invalid parameters
  852. * Set float range filter.
  853. */
  854. public void SetFilterFloatRange(string attribute, float min, float max, bool exclude)
  855. {
  856. //MyAssert(min <= max, "min must be less or equal to max");
  857. try
  858. {
  859. BinaryWriter bw = new BinaryWriter(_filterStreamData);
  860. WriteToStream(bw, attribute);
  861. WriteToStream(bw, SPH_FILTER_FLOATRANGE);
  862. WriteToStream(bw, min);
  863. WriteToStream(bw, max);
  864. WriteToStream(bw, exclude ? 1 : 0);
  865.  
  866. }
  867. catch (Exception e)
  868. {
  869. //MyAssert(false, "IOException: " + e.Message);
  870. }
  871. _filterCount++;
  872. }

/** Reset all currently set filters (for multi-queries). */
public void ResetFilters()
{
    /* should we close them first? */
    _filterStreamData = new MemoryStream();
    _filterCount = 0;

    /* reset GEO anchor */
    _latitudeAttr = null;
    _longitudeAttr = null;
    _latitude = 0;
    _longitude = 0;
}

/**
 * Setup geographical anchor point.
 *
 * Required to use @geodist in filters and sorting.
 * Distance will be computed to this point.
 *
 * @param latitudeAttr the name of latitude attribute
 * @param longitudeAttr the name of longitude attribute
 * @param latitude anchor point latitude, in radians
 * @param longitude anchor point longitude, in radians
 *
 * @throws SphinxException on invalid parameters
 */
public void SetGeoAnchor(string latitudeAttr, string longitudeAttr, float latitude, float longitude)
{
    //MyAssert(latitudeAttr != null && latitudeAttr.Length > 0, "latitudeAttr string must not be null or empty");
    //MyAssert(longitudeAttr != null && longitudeAttr.Length > 0, "longitudeAttr string must not be null or empty");

    _latitudeAttr = latitudeAttr;
    _longitudeAttr = longitudeAttr;
    _latitude = latitude;
    _longitude = longitude;
}

/** Set grouping attribute and function. */
public void SetGroupBy(string attribute, int func, string groupsort)
{
    //MyAssert(
    //    func == SPH_GROUPBY_DAY ||
    //    func == SPH_GROUPBY_WEEK ||
    //    func == SPH_GROUPBY_MONTH ||
    //    func == SPH_GROUPBY_YEAR ||
    //    func == SPH_GROUPBY_ATTR ||
    //    func == SPH_GROUPBY_ATTRPAIR, "unknown func value; use one of the available SPH_GROUPBY_xxx constants");

    _groupBy = attribute;
    _groupFunc = func;
    _groupSort = groupsort;
}

/** Set grouping attribute and function with default ("@group desc") groupsort (syntax sugar). */
public void SetGroupBy(string attribute, int func)
{
    SetGroupBy(attribute, func, "@group desc");
}

/** Set count-distinct attribute for group-by queries. */
public void SetGroupDistinct(string attribute)
{
    _groupDistinct = attribute;
}

/** Set distributed retries count and delay. */
public void SetRetries(int count, int delay)
{
    //MyAssert(count >= 0, "count must not be negative");
    //MyAssert(delay >= 0, "delay must not be negative");
    _retrycount = count;
    _retrydelay = delay;
}

/** Set distributed retries count with default (zero) delay (syntax sugar). */
public void SetRetries(int count)
{
    SetRetries(count, 0);
}

#endregion

#region Private Methods

/** Get and check response packet from searchd (internal method). */
private byte[] GetResponse(TcpClient sock, int client_ver)
{
    /* connect */
    BinaryReader br = null;
    NetworkStream SockInput = null;
    try
    {
        SockInput = sock.GetStream();
        br = new BinaryReader(SockInput);
    }
    catch (IOException e)
    {
        //MyAssert(false, "getInputStream() failed: " + e.Message);
        return null;
    }

    /* read response */
    byte[] response = null;
    int status = 0, ver = 0;
    int len = 0;
    try
    {
        /* read status fields */
        status = ReadInt16(br);
        ver = ReadInt16(br);
        len = ReadInt32(br);

        /* read response if non-empty */
        //MyAssert(len > 0, "zero-sized searchd response body");
        if (len > 0)
        {
            response = br.ReadBytes(len);
        }
        else
        {
            /* FIXME! no response, return null? */
        }

        /* check status */
        if (status == SEARCHD_WARNING)
        {
            /* the body starts with a length-prefixed warning message (big-endian
               int length, then UTF-8 text); record the warning and hand back
               the remaining payload instead of failing the whole request */
            byte[] wlen = new byte[4];
            Array.Copy(response, 0, wlen, 0, 4);
            if (BitConverter.IsLittleEndian)
                Array.Reverse(wlen);
            int iWarnLen = BitConverter.ToInt32(wlen, 0);
            _warning = Encoding.UTF8.GetString(response, 4, iWarnLen);

            byte[] body = new byte[response.Length - 4 - iWarnLen];
            Array.Copy(response, 4 + iWarnLen, body, 0, body.Length);
            response = body;
        }
        else if (status == SEARCHD_ERROR)
        {
            _error = "searchd error: " + Encoding.UTF8.GetString(response, 4, response.Length - 4);
            return null;
        }
        else if (status == SEARCHD_RETRY)
        {
            _error = "temporary searchd error: " + Encoding.UTF8.GetString(response, 4, response.Length - 4);
            return null;
        }
        else if (status != SEARCHD_OK)
        {
            _error = "searchd returned unknown status, code=" + status;
            return null;
        }
    }
    catch (IOException e)
    {
        if (len != 0)
        {
            /* build error message, including the stack trace for more failure details */
            _error = "failed to read searchd response (status=" + status + ", ver=" + ver + ", len=" + len + ", trace=" + e.StackTrace + ")";
        }
        else
        {
            _error = "received zero-sized searchd response (searchd crashed?): " + e.Message;
        }
        return null;
    }
    finally
    {
        try
        {
            if (br != null) br.Close();
            if (sock != null && !sock.Connected) sock.Close();
        }
        catch (IOException)
        {
            /* silently ignore close failures; nothing could be done anyway */
        }
    }

    return response;
}

/** Connect to searchd and exchange versions (internal method). */
//private TcpClient Connect()
//{
//    TcpClient sock;
//    try
//    {
//        //sock = new Socket(_host, _port);
//        //sock = new Socket(AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.IPv4);
//        sock = new TcpClient(_host, _port);
//        //sock.ReceiveTimeout = SPH_CLIENT_TIMEOUT_MILLISEC;
//    }
//    catch (Exception e)
//    {
//        _error = "connection to " + _host + ":" + _port + " failed: " + e.Message;
//        return null;
//    }

//    NetworkStream ns = null;

//    try
//    {
//        ns = sock.GetStream();

//        BinaryReader sr = new BinaryReader(ns);
//        BinaryWriter sw = new BinaryWriter(ns);

//        WriteToStream(sw, 1);
//        sw.Flush();
//        int version = 0;
//        version = ReadInt32(sr);

//        if (version < 1)
//        {
//            sock.Close();
//            _error = "expected searchd protocol version 1+, got version " + version;
//            return null;
//        }

//        //WriteToStream(sw, VER_MAJOR_PROTO);
//        //sw.Flush();

//        WriteToStream(sw, (short)4); // COMMAND_PERSIST
//        WriteToStream(sw, (short)0); // PERSIST_COMMAND_VERSION
//        WriteToStream(sw, 4);        // COMMAND_LENGTH
//        WriteToStream(sw, 1);        // PERSIST_COMMAND_BODY
//        sw.Flush();
//    }
//    catch (IOException e)
//    {
//        _error = "Connect: Read from socket failed: " + e.Message;
//        try
//        {
//            sock.Close();
//        }
//        catch (IOException e1)
//        {
//            _error = _error + " Cannot close socket: " + e1.Message;
//        }
//        return null;
//    }
//    return sock;
//}

#endregion

#region Network IO Helpers
/* searchd speaks network byte order (big-endian); all helpers below
   reverse the bytes when the host platform is little-endian */

private string ReadUtf8(BinaryReader br)
{
    int length = ReadInt32(br);

    if (length > 0)
    {
        byte[] data = br.ReadBytes(length);
        return Encoding.UTF8.GetString(data);
    }
    return "";
}

private short ReadInt16(BinaryReader br)
{
    byte[] idata = br.ReadBytes(2);
    if (BitConverter.IsLittleEndian)
        Array.Reverse(idata);
    return BitConverter.ToInt16(idata, 0);
}

private int ReadInt32(BinaryReader br)
{
    byte[] idata = br.ReadBytes(4);
    if (BitConverter.IsLittleEndian)
        Array.Reverse(idata);
    return BitConverter.ToInt32(idata, 0);
}

private float ReadFloat(BinaryReader br)
{
    byte[] idata = br.ReadBytes(4);
    if (BitConverter.IsLittleEndian)
        Array.Reverse(idata);
    return BitConverter.ToSingle(idata, 0);
}

private uint ReadUInt32(BinaryReader br)
{
    byte[] idata = br.ReadBytes(4);
    if (BitConverter.IsLittleEndian)
        Array.Reverse(idata);
    return BitConverter.ToUInt32(idata, 0);
}

private long ReadInt64(BinaryReader br)
{
    byte[] idata = br.ReadBytes(8);
    if (BitConverter.IsLittleEndian)
        Array.Reverse(idata);
    return BitConverter.ToInt64(idata, 0);
}

private void WriteToStream(BinaryWriter bw, short data)
{
    byte[] d = BitConverter.GetBytes(data);
    if (BitConverter.IsLittleEndian)
        Array.Reverse(d);
    bw.Write(d);
}

private void WriteToStream(BinaryWriter bw, int data)
{
    byte[] d = BitConverter.GetBytes(data);
    if (BitConverter.IsLittleEndian)
        Array.Reverse(d);
    bw.Write(d);
}

private void WriteToStream(BinaryWriter bw, float data)
{
    byte[] d = BitConverter.GetBytes(data);
    if (BitConverter.IsLittleEndian)
        Array.Reverse(d);
    bw.Write(d);
}

private void WriteToStream(BinaryWriter bw, long data)
{
    byte[] d = BitConverter.GetBytes(data);
    if (BitConverter.IsLittleEndian)
        Array.Reverse(d);
    bw.Write(d);
}

private void WriteToStream(BinaryWriter bw, byte[] data)
{
    bw.Write(data);
}

private void WriteToStream(BinaryWriter bw, string data)
{
    /* length-prefixed UTF-8 string */
    byte[] d = Encoding.UTF8.GetBytes(data);
    WriteToStream(bw, d.Length);
    bw.Write(d);
}
#endregion
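A quick aside on why every helper calls `Array.Reverse`: the searchd binary protocol uses network byte order (big-endian), while .NET on x86/x64 is little-endian, so each value's bytes must be flipped before they go on the wire. A minimal standalone sketch of that conversion, using searchd's default port 9312 as the sample value:

```csharp
using System;

class EndianDemo
{
    static void Main()
    {
        // 9312 == 0x2460; on a little-endian host GetBytes yields 60-24-00-00
        byte[] d = BitConverter.GetBytes(9312);
        if (BitConverter.IsLittleEndian)
            Array.Reverse(d); // big-endian wire form: 00-00-24-60
        Console.WriteLine(BitConverter.ToString(d)); // prints "00-00-24-60"
    }
}
```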

#region Other Helpers
static readonly DateTime _epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);

/** Convert a DateTime to a Unix timestamp (whole seconds since 1970-01-01 UTC). */
public static int ConvertToUnixTimestamp(DateTime dateTime)
{
    TimeSpan diff = dateTime.ToUniversalTime() - _epoch;
    return Convert.ToInt32(Math.Floor(diff.TotalSeconds));
}
#endregion

#region IDisposable Members

public void Dispose()
{
    if (this._conn != null)
    {
        try
        {
            if (this._conn.Connected)
                this._conn.Close();
        }
        finally
        {
            this._conn = null;
        }
    }
}

#endregion
}

public class SphinxResult
{
    /** Full-text field names. */
    public String[] fields;

    /** Attribute names. */
    public String[] attrNames;

    /** Attribute types (refer to SPH_ATTR_xxx constants in SphinxClient). */
    public int[] attrTypes;

    /** Retrieved matches. */
    public SphinxMatch[] matches;

    /** Total matches in this result set. */
    public int total;

    /** Total matches found in the index(es). */
    public int totalFound;

    /** Elapsed time (as reported by searchd), in seconds. */
    public float time;

    /** Per-word statistics. */
    public SphinxWordInfo[] words;

    /** Warning message, if any. */
    public String warning = null;

    /** Error message, if any. */
    public String error = null;

    /** Query status (refer to SEARCHD_xxx constants in SphinxClient). */
    private int status = -1;

    /** Trivial constructor, initializes an empty result set. */
    public SphinxResult()
    {
        this.attrNames = new String[0];
        this.matches = new SphinxMatch[0];
        this.words = new SphinxWordInfo[0];
        this.fields = new String[0];
        this.attrTypes = new int[0];
    }

    public bool Success
    {
        get
        {
            return this.status == SphinxClient.SEARCHD_OK;
        }
    }

    /** Get query status. */
    public int getStatus()
    {
        return status;
    }

    /** Set query status (accessible from API package only). */
    internal void setStatus(int status)
    {
        this.status = status;
    }
}

public class SphinxMatch
{
    /** Matched document ID. */
    public long docId;

    /** Matched document weight. */
    public int weight;

    /** Matched document attribute values. */
    public ArrayList attrValues;

    /** Trivial constructor. */
    public SphinxMatch(long docId, int weight)
    {
        this.docId = docId;
        this.weight = weight;
        this.attrValues = new ArrayList();
    }
}

public class SphinxWordInfo
{
    /** Word form as returned from search daemon, stemmed or otherwise postprocessed. */
    public String word;

    /** Total amount of matching documents in collection. */
    public long docs;

    /** Total amount of hits (occurrences) in collection. */
    public long hits;

    /** Trivial constructor. */
    public SphinxWordInfo(String word, long docs, long hits)
    {
        this.word = word;
        this.docs = docs;
        this.hits = hits;
    }
}
}
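Putting the pieces together, here is a minimal sketch of how the client classes above might be driven. The host, port, index name, and the `Query(keywords, index)` call are assumptions: `Query` belongs to the part of `SphinxClient` shown earlier in this series, not to this excerpt, and `SPH_GROUPBY_ATTR` is assumed to be an accessible constant on the class.

```csharp
using System;

namespace phinxDemo
{
    class QueryExample
    {
        static void Main()
        {
            // SphinxClient implements IDisposable, so `using` closes the connection for us.
            using (SphinxClient client = new SphinxClient("192.168.233.129", 9312))
            {
                client.SetRetries(2, 500);                                // two retries, 500 ms apart
                client.SetFilterFloatRange("price", 10.0f, 99.9f, false); // keep docs with 10 <= price <= 99.9
                client.SetGroupBy("category_id", SphinxClient.SPH_GROUPBY_ATTR);

                SphinxResult result = client.Query("关键词", "test1");    // hypothetical index name
                if (result != null && result.Success)
                {
                    Console.WriteLine("matched {0} of {1} documents in {2}s",
                        result.total, result.totalFound, result.time);
                    foreach (SphinxMatch m in result.matches)
                        Console.WriteLine("doc={0} weight={1}", m.docId, m.weight);
                }
                else
                {
                    Console.WriteLine("query failed: {0}", result == null ? "no response" : result.error);
                }
            }
        }
    }
}
```

Remember that searchd only returns document IDs and attributes, so a second lookup against MySQL by `docId` is still needed to fetch the actual rows.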

  

Here is what a successful debug run looks like:

OK, that's a wrap.
