﻿<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:trackback="http://madskills.com/public/xml/rss/module/trackback/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/"><channel><title>BlogJava-西瓜地儿-随笔分类-Lucene</title><link>http://www.blogjava.net/ashutc/category/45558.html</link><description>沈阳求职（java3年以上经验）！ashutc@126.com</description><language>zh-cn</language><lastBuildDate>Fri, 15 Apr 2011 09:12:56 GMT</lastBuildDate><pubDate>Fri, 15 Apr 2011 09:12:56 GMT</pubDate><ttl>60</ttl><item><title>lucene评分分析</title><link>http://www.blogjava.net/ashutc/archive/2011/04/15/348339.html</link><dc:creator>西瓜</dc:creator><author>西瓜</author><pubDate>Fri, 15 Apr 2011 03:02:00 GMT</pubDate><guid>http://www.blogjava.net/ashutc/archive/2011/04/15/348339.html</guid><wfw:comment>http://www.blogjava.net/ashutc/comments/348339.html</wfw:comment><comments>http://www.blogjava.net/ashutc/archive/2011/04/15/348339.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ashutc/comments/commentRss/348339.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ashutc/services/trackbacks/348339.html</trackback:ping><description><![CDATA[<br />
<div class="blog_content">
<p><span style="font-size: medium;">The IndexSearcher class provides a method that explains how Lucene computed a score:</span></p>
<p><span style="font-size: medium;">public Explanation explain(Weight weight, int doc) throws IOException {<br />
&nbsp;&nbsp;&nbsp; return weight.explain(reader, doc);<br />
}</span></p>
<p><span style="font-size: medium;">The returned Explanation instance describes how a Document's score was derived. Let's run a quick test to see exactly what information an Explanation records about a Document.</span></p>
<p><span style="font-size: medium;">Write a test class as follows:</span></p>
<p><span style="font-size: medium;">package org.shirdrn.lucene.learn;</span></p>
<p><span style="font-size: medium;">import java.io.IOException;<br />
import java.util.Date;</span></p>
<p><span style="font-size: medium;">import net.teamhot.lucene.ThesaurusAnalyzer;</span></p>
<p><span style="font-size: medium;">import org.apache.lucene.document.Document;<br />
import org.apache.lucene.document.Field;<br />
import org.apache.lucene.index.CorruptIndexException;<br />
import org.apache.lucene.index.IndexWriter;<br />
import org.apache.lucene.index.Term;<br />
import org.apache.lucene.index.TermDocs;<br />
import org.apache.lucene.search.Explanation;<br />
import org.apache.lucene.search.Hits;<br />
import org.apache.lucene.search.IndexSearcher;<br />
import org.apache.lucene.search.Query;<br />
import org.apache.lucene.search.TermQuery;<br />
import org.apache.lucene.store.LockObtainFailedException;</span></p>
<p><span style="font-size: medium;">public class AboutLuceneScore {<br />
<br />
private String path = "E:\\Lucene\\index";<br />
<br />
public void createIndex(){<br />
&nbsp;&nbsp; IndexWriter writer;<br />
&nbsp;&nbsp; try {<br />
&nbsp;&nbsp;&nbsp; writer = new IndexWriter(path,new ThesaurusAnalyzer(),true);<br />
&nbsp;&nbsp;&nbsp;<br />
&nbsp;&nbsp;&nbsp; Field fieldA = new Field("contents","一人",Field.Store.YES,Field.Index.TOKENIZED); <br />
&nbsp;&nbsp;&nbsp; Document docA = new Document(); <br />
&nbsp;&nbsp;&nbsp; docA.add(fieldA);<br />
&nbsp;&nbsp;&nbsp;<br />
&nbsp;&nbsp;&nbsp; Field fieldB = new Field("contents","一人 之交 一人之交",Field.Store.YES,Field.Index.TOKENIZED);<br />
&nbsp;&nbsp;&nbsp; Document docB = new Document(); <br />
&nbsp;&nbsp;&nbsp; docB.add(fieldB);<br />
&nbsp;&nbsp;&nbsp;<br />
&nbsp;&nbsp;&nbsp; Field fieldC = new Field("contents","一人 之下 一人之下",Field.Store.YES,Field.Index.TOKENIZED);<br />
&nbsp;&nbsp;&nbsp; Document docC = new Document(); <br />
&nbsp;&nbsp;&nbsp; docC.add(fieldC);<br />
&nbsp;&nbsp;&nbsp;<br />
&nbsp;&nbsp;&nbsp; Field fieldD = new Field("contents","一人 做事 一人当 一人做事一人当",Field.Store.YES,Field.Index.TOKENIZED); <br />
&nbsp;&nbsp;&nbsp; Document docD = new Document(); <br />
&nbsp;&nbsp;&nbsp; docD.add(fieldD);<br />
&nbsp;&nbsp;&nbsp;<br />
&nbsp;&nbsp;&nbsp; Field fieldE = new Field("contents","一人 做事 一人當 一人做事一人當",Field.Store.YES,Field.Index.TOKENIZED);<br />
&nbsp;&nbsp;&nbsp; Document docE = new Document(); <br />
&nbsp;&nbsp;&nbsp; docE.add(fieldE);</span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp; writer.addDocument(docA);<br />
&nbsp;&nbsp;&nbsp; writer.addDocument(docB);<br />
&nbsp;&nbsp;&nbsp; writer.addDocument(docC);<br />
&nbsp;&nbsp;&nbsp; writer.addDocument(docD);<br />
&nbsp;&nbsp;&nbsp; writer.addDocument(docE);<br />
&nbsp;&nbsp;&nbsp;<br />
&nbsp;&nbsp;&nbsp; writer.close();<br />
&nbsp;&nbsp; } catch (CorruptIndexException e) {<br />
&nbsp;&nbsp;&nbsp; e.printStackTrace();<br />
&nbsp;&nbsp; } catch (LockObtainFailedException e) {<br />
&nbsp;&nbsp;&nbsp; e.printStackTrace();<br />
&nbsp;&nbsp; } catch (IOException e) {<br />
&nbsp;&nbsp;&nbsp; e.printStackTrace();<br />
&nbsp;&nbsp; }<br />
}<br />
<br />
public static void main(String[] args) {<br />
&nbsp;&nbsp; AboutLuceneScore aus = new AboutLuceneScore();<br />
&nbsp;&nbsp; aus.createIndex();&nbsp;&nbsp;&nbsp;</span><span style="font-size: medium;"><span style="color: #339966;"> // build the index<br />
</span>&nbsp;&nbsp; try {<br />
&nbsp;&nbsp;&nbsp; String keyword = "一人";<br />
&nbsp;&nbsp;&nbsp; Term term = new Term("contents",keyword);<br />
&nbsp;&nbsp;&nbsp; Query query = new TermQuery(term); <br />
&nbsp;&nbsp;&nbsp; IndexSearcher searcher = new IndexSearcher(aus.path);<br />
&nbsp;&nbsp;&nbsp; Date startTime = new Date();<br />
&nbsp;&nbsp;&nbsp; Hits hits = searcher.search(query);<br />
&nbsp;&nbsp;&nbsp; TermDocs termDocs = searcher.getIndexReader().termDocs(term);<br />
&nbsp;&nbsp;&nbsp; while(termDocs.next()){<br />
&nbsp;&nbsp;&nbsp;&nbsp; System.out.print("搜索关键字&lt;"+keyword+"&gt;在编号为 "+termDocs.doc());<br />
&nbsp;&nbsp;&nbsp;&nbsp; System.out.println(" 的Document中出现过 "+termDocs.freq()+" 次");<br />
&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;&nbsp; System.out.println("********************************************************************");<br />
&nbsp;&nbsp;&nbsp; for(int i=0;i&lt;hits.length();i++){<br />
&nbsp;&nbsp;&nbsp;&nbsp; System.out.println("Document的内部编号为 ： "+hits.id(i));<br />
&nbsp;&nbsp;&nbsp;&nbsp; System.out.println("Document内容为 ： "+hits.doc(i));<br />
&nbsp;&nbsp;&nbsp;&nbsp; System.out.println("Document得分为 ： "+hits.score(i));<br />
&nbsp;&nbsp;&nbsp;&nbsp; Explanation e = searcher.explain(query, hits.id(i));<br />
&nbsp;&nbsp;&nbsp;&nbsp; System.out.println("Explanation为 ： \n"+e);<br />
&nbsp;&nbsp;&nbsp;&nbsp; System.out.println("Document对应的Explanation的一些参数值如下： ");<br />
&nbsp;&nbsp;&nbsp;&nbsp; System.out.println("Explanation的getValue()为 ： "+e.getValue());<br />
&nbsp;&nbsp;&nbsp;&nbsp; System.out.println("Explanation的getDescription()为 ： "+e.getDescription());<br />
&nbsp;&nbsp;&nbsp;&nbsp; System.out.println("********************************************************************");<br />
&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;&nbsp; System.out.println("共检索出符合条件的Document "+hits.length()+" 个。");<br />
&nbsp;&nbsp;&nbsp; Date finishTime = new Date();<br />
&nbsp;&nbsp;&nbsp; long timeOfSearch = finishTime.getTime() - startTime.getTime();<br />
&nbsp;&nbsp;&nbsp; System.out.println("本次搜索所用的时间为 "+timeOfSearch+" ms");<br />
&nbsp;&nbsp; } catch (CorruptIndexException e) {<br />
&nbsp;&nbsp;&nbsp; e.printStackTrace();<br />
&nbsp;&nbsp; } catch (IOException e) {<br />
&nbsp;&nbsp;&nbsp; e.printStackTrace();<br />
&nbsp;&nbsp; }<br />
&nbsp;&nbsp;<br />
}<br />
}</span></p>
<p><span style="font-size: medium;">The test class implements a createIndex() method to build an index, then searches for the keyword &#8220;一人&#8221; and retrieves information about the Documents that match it.</span></p>
<p><span style="font-size: medium;">The first part of the output shows how many times the keyword &#8220;一人&#8221; occurs in each Document.</span></p>
<p><span style="font-size: medium;">The second part shows each hit's Explanation and its score breakdown.</span></p>
<p><span style="font-size: medium;">The test output is shown below:</span></p>
<p><span style="font-size: medium;">搜索关键字&lt;一人&gt;在编号为 0 的Document中出现过 1 次<br />
搜索关键字&lt;一人&gt;在编号为 1 的Document中出现过 1 次<br />
搜索关键字&lt;一人&gt;在编号为 2 的Document中出现过 1 次<br />
搜索关键字&lt;一人&gt;在编号为 3 的Document中出现过 2 次<br />
搜索关键字&lt;一人&gt;在编号为 4 的Document中出现过 2 次<br />
********************************************************************<br />
Document的内部编号为 ： 0<br />
Document内容为 ： Document&lt;stored/uncompressed,indexed,tokenized&lt;contents:一人&gt;&gt;<br />
Document得分为 ： 0.81767845<br />
Explanation为 ： <br />
0.81767845 = (MATCH) fieldWeight(contents:一人 in 0), product of:<br />
1.0 = tf(termFreq(contents:一人)=1)<br />
0.81767845 = idf(docFreq=5)<br />
1.0 = fieldNorm(field=contents, doc=0)</span></p>
<p><span style="font-size: medium;">Document对应的Explanation的一些参数值如下： <br />
Explanation的getValue()为 ： 0.81767845<br />
Explanation的getDescription()为 ： fieldWeight(contents:一人 in 0), product of:<br />
********************************************************************<br />
Document的内部编号为 ： 3<br />
Document内容为 ： Document&lt;stored/uncompressed,indexed,tokenized&lt;contents:一人 做事 一人当 一人做事一人当&gt;&gt;<br />
Document得分为 ： 0.5059127<br />
Explanation为 ： <br />
0.5059127 = (MATCH) fieldWeight(contents:一人 in 3), product of:<br />
1.4142135 = tf(termFreq(contents:一人)=2)<br />
0.81767845 = idf(docFreq=5)<br />
0.4375 = fieldNorm(field=contents, doc=3)</span></p>
<p><span style="font-size: medium;">Document对应的Explanation的一些参数值如下： <br />
Explanation的getValue()为 ： 0.5059127<br />
Explanation的getDescription()为 ： fieldWeight(contents:一人 in 3), product of:<br />
********************************************************************<br />
Document的内部编号为 ： 4<br />
Document内容为 ： Document&lt;stored/uncompressed,indexed,tokenized&lt;contents:一人 做事 一人當 一人做事一人當&gt;&gt;<br />
Document得分为 ： 0.5059127<br />
Explanation为 ： <br />
0.5059127 = (MATCH) fieldWeight(contents:一人 in 4), product of:<br />
1.4142135 = tf(termFreq(contents:一人)=2)<br />
0.81767845 = idf(docFreq=5)<br />
0.4375 = fieldNorm(field=contents, doc=4)</span></p>
<p><span style="font-size: medium;">Document对应的Explanation的一些参数值如下： <br />
Explanation的getValue()为 ： 0.5059127<br />
Explanation的getDescription()为 ： fieldWeight(contents:一人 in 4), product of:<br />
********************************************************************<br />
Document的内部编号为 ： 1<br />
Document内容为 ： Document&lt;stored/uncompressed,indexed,tokenized&lt;contents:一人 之交 一人之交&gt;&gt;<br />
Document得分为 ： 0.40883923<br />
Explanation为 ： <br />
0.40883923 = (MATCH) fieldWeight(contents:一人 in 1), product of:<br />
1.0 = tf(termFreq(contents:一人)=1)<br />
0.81767845 = idf(docFreq=5)<br />
0.5 = fieldNorm(field=contents, doc=1)</span></p>
<p><span style="font-size: medium;">Document对应的Explanation的一些参数值如下： <br />
Explanation的getValue()为 ： 0.40883923<br />
Explanation的getDescription()为 ： fieldWeight(contents:一人 in 1), product of:<br />
********************************************************************<br />
Document的内部编号为 ： 2<br />
Document内容为 ： Document&lt;stored/uncompressed,indexed,tokenized&lt;contents:一人 之下 一人之下&gt;&gt;<br />
Document得分为 ： 0.40883923<br />
Explanation为 ： <br />
0.40883923 = (MATCH) fieldWeight(contents:一人 in 2), product of:<br />
1.0 = tf(termFreq(contents:一人)=1)<br />
0.81767845 = idf(docFreq=5)<br />
0.5 = fieldNorm(field=contents, doc=2)</span></p>
<p><span style="font-size: medium;">Document对应的Explanation的一些参数值如下： <br />
Explanation的getValue()为 ： 0.40883923<br />
Explanation的getDescription()为 ： fieldWeight(contents:一人 in 2), product of:<br />
********************************************************************<br />
共检索出符合条件的Document 5 个。<br />
本次搜索所用的时间为 79 ms</span></p>
<p><span style="font-size: medium;">Analyzing the output, we can draw the following conclusions:</span></p>
<p><span style="font-size: medium;">■ hits.score(i) in the test class equals Explanation's getValue(), i.e. the score Lucene uses by default;</span></p>
<p><span style="font-size: medium;">■ By default, Lucene sorts the search results by Document score;</span></p>
<p><span style="font-size: medium;">■ By default, Documents with equal scores are ordered by their internal Document number; for example, (3, 4) and (1, 2) above are two pairs of equally scored Documents, and within each pair the results are ordered by Document number;</span></p>
<p><span style="font-size: medium;">Looking again at the explain method of IndexSearcher:</span></p>
<p><span style="font-size: medium;">public Explanation explain(Weight weight, int doc) throws IOException {<br />
&nbsp;&nbsp;&nbsp; return weight.explain(reader, doc);<br />
}</span></p>
<p><span style="font-size: medium;">we can see that it simply delegates to the explain() method of the Weight interface. A Weight is tied to a Query: it records the state built for one query execution, which is what allows a Query instance to be reused.</span></p>
<p><span style="font-size: medium;">Concretely, explain() can be traced into TermWeight, a concrete implementation of Weight defined as an inner class of TermQuery. TermWeight's explain() method is shown below:</span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp; public Explanation explain(IndexReader reader, int doc)<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; throws IOException {</span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ComplexExplanation result = new ComplexExplanation();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; result.setDescription("weight("+getQuery()+" in "+doc+"), product of:");</span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <span style="color: #ff0000;">Explanation idfExpl = new Explanation(idf, "idf(docFreq=" + reader.docFreq(term) + ")");</span></span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <span style="color: #339966;">// explain query weight</span><br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Explanation queryExpl = new Explanation();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; queryExpl.setDescription("queryWeight(" + getQuery() + "), product of:");</span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Explanation boostExpl = new Explanation(getBoost(), "boost");<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (getBoost() != 1.0f)<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; queryExpl.addDetail(boostExpl);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; queryExpl.addDetail(idfExpl);</span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Explanation queryNormExpl = new Explanation(queryNorm,"queryNorm");<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; queryExpl.addDetail(queryNormExpl);</span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; queryExpl.setValue(boostExpl.getValue() *idfExpl.getValue() *queryNormExpl.getValue());</span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; result.addDetail(queryExpl);</span></p>
<p><span style="font-size: medium;"><span style="color: #339966;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; // explain field weight<br />
</span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; String field = term.field();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ComplexExplanation fieldExpl = new ComplexExplanation();<br />
<span style="color: #ff0000;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; fieldExpl.setDescription("fieldWeight("+term+" in "+doc+"), product of:");</span></span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Explanation tfExpl = scorer(reader).explain(doc);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; fieldExpl.addDetail(tfExpl);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; fieldExpl.addDetail(idfExpl);</span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Explanation fieldNormExpl = new Explanation();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; byte[] fieldNorms = reader.norms(field);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; float fieldNorm =<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; fieldNorms!=null ? Similarity.decodeNorm(fieldNorms[doc]) : 0.0f;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; fieldNormExpl.setValue(fieldNorm);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <span style="color: #ff0000;">fieldNormExpl.setDescription("fieldNorm(field="+field+", doc="+doc+")");</span><br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; fieldExpl.addDetail(fieldNormExpl);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; fieldExpl.setMatch(Boolean.valueOf(tfExpl.isMatch()));<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; fieldExpl.setValue(tfExpl.getValue() *idfExpl.getValue() *fieldNormExpl.getValue());</span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; result.addDetail(fieldExpl);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; result.setMatch(fieldExpl.getMatch());<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; </span><span style="font-size: medium;"><span style="color: #339966;">// combine them<br />
</span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; result.setValue(queryExpl.getValue() * fieldExpl.getValue());</span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (queryExpl.getValue() == 1.0f)<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return fieldExpl;</span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return result;<br />
&nbsp;&nbsp;&nbsp; }</span></p>
<p><span style="font-size: medium;">Comparing the search output with TermWeight's explain() method above, the output strings correspond one to one, e.g. idf (inverse document frequency), fieldNorm, and fieldWeight.</span></p>
<p><span style="font-size: medium;">The information for the first Document in the result set:</span></p>
<p><span style="font-size: medium;">Document的内部编号为 ： 0<br />
Document内容为 ： Document&lt;stored/uncompressed,indexed,tokenized&lt;contents:一人&gt;&gt;<br />
Document得分为 ： 0.81767845<br />
Explanation为 ： <br />
0.81767845 = (MATCH) fieldWeight(contents:一人 in 0), product of:<br />
</span><span style="font-size: medium;"><span style="color: #ff0000;">1.0 = tf(termFreq(contents:一人)=1)<br />
</span>0.81767845 = idf(docFreq=5)<br />
1.0 = fieldNorm(field=contents, doc=0)</span></p>
<p><span style="font-size: medium;">Document对应的Explanation的一些参数值如下： <br />
Explanation的getValue()为 ： 0.81767845<br />
Explanation的getDescription()为 ： fieldWeight(contents:一人 in 0), product of:</span></p>
<p><strong><span style="font-size: medium;">Computing tf</span></strong></p>
<p><span style="font-size: medium;">The tf above is the Term Frequency; its exact meaning is documented in the org.apache.lucene.search.Similarity class. Lucene does not use the raw term frequency directly; what it actually uses is the square root of the frequency:</span></p>
<p>
</p>
<table class="FCK__ShowTableBorders" align="center" border="0" cellpadding="2" cellspacing="2">
    <tbody>
        <tr>
            <td valign="middle" align="right"><span style="font-size: medium;"><code><span style="font-family: NSimsun;">tf(t in d)</span></code> =</span></td>
            <td valign="top" align="center"><span style="font-size: medium;">frequency<sup><big>&#189;</big></sup></span></td>
        </tr>
    </tbody>
</table>
<p><span style="font-size: medium;">This is computed in DefaultSimilarity, a subclass of org.apache.lucene.search.Similarity, as follows:</span></p>
<p><span style="font-size: medium;"><span style="color: #339966;">/** Implemented as &lt;code&gt;sqrt(freq)&lt;/code&gt;. */</span><br />
public float tf(float freq) {<br />
&nbsp;&nbsp;&nbsp; return (float)Math.sqrt(freq);<br />
}</span></p>
<p><span style="font-size: medium;">That is, a Document's tf is the square root of freq, the number of times the search term occurs in that Document.</span></p>
<p><span style="font-size: medium;">For example, from our search results:</span></p>
<p><span style="font-size: medium;">搜索关键字&lt;一人&gt;在编号为 0 的Document中出现过 1 次<br />
搜索关键字&lt;一人&gt;在编号为 1 的Document中出现过 1 次<br />
搜索关键字&lt;一人&gt;在编号为 2 的Document中出现过 1 次<br />
搜索关键字&lt;一人&gt;在编号为 3 的Document中出现过 2 次<br />
搜索关键字&lt;一人&gt;在编号为 4 的Document中出现过 2 次</span></p>
<p><span style="font-size: medium;">The tf of each Document is computed as follows:</span></p>
<p><span style="font-size: medium;">tf for Document 0: (float)Math.sqrt(1) = 1.0;<br />
tf for Document 1: (float)Math.sqrt(1) = 1.0;<br />
tf for Document 2: (float)Math.sqrt(1) = 1.0;<br />
tf for Document 3: (float)Math.sqrt(2) = 1.4142135;<br />
tf for Document 4: (float)Math.sqrt(2) = 1.4142135;</span></p>
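Since DefaultSimilarity's tf is just a square root, the tf values above can be reproduced with plain Java, no Lucene required. A standalone sketch (the class name TfDemo is invented for illustration):

```java
// Standalone sketch mirroring DefaultSimilarity.tf(freq) = sqrt(freq);
// not the actual Lucene class.
public class TfDemo {
    static float tf(float freq) {
        return (float) Math.sqrt(freq);
    }

    public static void main(String[] args) {
        int[] freqs = {1, 1, 1, 2, 2}; // occurrences of the term in docs 0..4
        for (int doc = 0; doc < freqs.length; doc++) {
            System.out.println("doc " + doc + " tf = " + tf(freqs[doc]));
        }
    }
}
```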
<p><strong><span style="font-size: medium;">Computing idf</span></strong></p>
<p><span style="font-size: medium;">Each retrieved Document also has an idf. DefaultSimilarity implements it as follows:</span></p>
<p><span style="font-size: medium;"><span style="color: #339966;">/** Implemented as &lt;code&gt;log(numDocs/(docFreq+1)) + 1&lt;/code&gt;. */</span><br />
public float idf(int docFreq, int numDocs) {<br />
&nbsp;&nbsp;&nbsp; return (float)(Math.log(numDocs/(double)(docFreq+1)) + 1.0);<br />
}</span></p>
<p><span style="font-size: medium;">Here docFreq is the number of Documents that contain the search term (docFreq=5 in our test), and numDocs is the total number of Documents in the index. Our test is a special case in which every Document was retrieved, so numDocs=5 as well.</span></p>
<p><span style="font-size: medium;">The idf of each Document is computed as follows:</span></p>
<p><span style="font-size: medium;">idf for Document 0: (float)(Math.log(5/(double)(5+1)) + 1.0) = 0.81767845;<br />
idf for Document 1: (float)(Math.log(5/(double)(5+1)) + 1.0) = 0.81767845;<br />
idf for Document 2: (float)(Math.log(5/(double)(5+1)) + 1.0) = 0.81767845;<br />
idf for Document 3: (float)(Math.log(5/(double)(5+1)) + 1.0) = 0.81767845;<br />
idf for Document 4: (float)(Math.log(5/(double)(5+1)) + 1.0) = 0.81767845;</span></p>
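The idf value is likewise easy to check with plain Java. A standalone sketch (IdfDemo is an invented name):

```java
// Standalone sketch mirroring DefaultSimilarity.idf = log(numDocs/(docFreq+1)) + 1;
// not the actual Lucene class.
public class IdfDemo {
    static float idf(int docFreq, int numDocs) {
        return (float) (Math.log(numDocs / (double) (docFreq + 1)) + 1.0);
    }

    public static void main(String[] args) {
        // In the test, all 5 documents contain the term, so docFreq = numDocs = 5.
        System.out.println(idf(5, 5)); // ~0.81767845
    }
}
```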
<p><strong><span style="font-size: medium;">Computing lengthNorm</span></strong></p>
<p><span style="font-size: medium;">DefaultSimilarity implements lengthNorm as follows:</span></p>
<p><span style="font-size: medium;">public float lengthNorm(String fieldName, int numTerms) {<br />
&nbsp;&nbsp;&nbsp; return (float)(1.0 / Math.sqrt(numTerms));<br />
}</span></p>
<p><span style="font-size: medium;">The lengthNorm of each Document is computed as follows:</span></p>
<p><span style="font-size: medium;">lengthNorm for Document 0: (float)(1.0 / Math.sqrt(1)) = 1.0/1.0 = 1.0;<br />
lengthNorm for Document 1: (float)(1.0 / Math.sqrt(1)) = 1.0/1.0 = 1.0;<br />
lengthNorm for Document 2: (float)(1.0 / Math.sqrt(1)) = 1.0/1.0 = 1.0;<br />
lengthNorm for Document 3: (float)(1.0 / Math.sqrt(2)) = 1.0/1.4142135 = 0.7071068;<br />
lengthNorm for Document 4: (float)(1.0 / Math.sqrt(2)) = 1.0/1.4142135 = 0.7071068;</span></p>
<p><span style="font-size: medium;">(Note: numTerms is really the total number of tokens the analyzer produced for the field, and the fieldNorm stored in the index is a lossy one-byte encoding of lengthNorm; this is why the fieldNorm values in the output above, such as 0.5 and 0.4375, do not exactly match these figures.)</span></p>
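The same one-liner can be checked standalone (LengthNormDemo is an invented name; this mirrors the formula only, not the byte encoding):

```java
// Standalone sketch mirroring DefaultSimilarity.lengthNorm = 1/sqrt(numTerms);
// not the actual Lucene class.
public class LengthNormDemo {
    static float lengthNorm(int numTerms) {
        return (float) (1.0 / Math.sqrt(numTerms));
    }

    public static void main(String[] args) {
        System.out.println(lengthNorm(1)); // 1.0
        System.out.println(lengthNorm(2)); // ~0.7071068
    }
}
```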
<p><strong><span style="font-size: medium;">About fieldNorm</span></strong></p>
<p><span style="font-size: medium;">fieldNorm is written at indexing time; at search time it is read back from the index file and decoded into a float value that enters the Document's score computation.</span></p>
<p><span style="font-size: medium;">In org.apache.lucene.search.TermQuery.TermWeight, the explain method reads the fieldNorm through the open IndexReader. It is stored in the index as a byte[], so it must be decoded:</span></p>
<p><span style="font-size: medium;">byte[] fieldNorms = reader.norms(field);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; float fieldNorm = fieldNorms!=null ? Similarity.decodeNorm(fieldNorms[doc]) : 0.0f;</span></p>
<p><span style="font-size: medium;">Similarity's decodeNorm method converts the stored byte back into a float:</span></p>
<p><span style="font-size: medium;">public static float decodeNorm(byte b) {<br />
&nbsp;&nbsp;&nbsp; return NORM_TABLE[b &amp; 0xFF]; <span style="color: #339966;">// &amp; 0xFF maps negative bytes to positive above 127</span><br />
}</span></p>
<p><span style="font-size: medium;">Once the float fieldNorm has been read back, it can take part in the arithmetic that ultimately produces the Document's score.</span></p>
<p><strong><span style="font-size: medium;">Computing queryWeight</span></strong></p>
<p><span style="font-size: medium;">queryWeight is computed in the sumOfSquaredWeights method of the org.apache.lucene.search.TermQuery.TermWeight class:</span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp; public float sumOfSquaredWeights() {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; queryWeight = idf * getBoost();&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="font-size: medium;"><span style="color: #339966;"> // compute query weight<br />
</span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return queryWeight * queryWeight;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="font-size: medium;"><span style="color: #339966;"> // square it<br />
</span>&nbsp;&nbsp;&nbsp; }</span></p>
<p><span style="font-size: medium;">By default queryWeight = idf, because Lucene's default boost factor is 1.0.</span></p>
<p><span style="font-size: medium;">In our test the query weight is:</span></p>
<p><span style="font-size: medium;">queryWeight = idf * 1.0 = 0.81767845, and sumOfSquaredWeights returns its square: 0.81767845 * 0.81767845 = 0.6685980475944025;</span></p>
<p><strong><span style="font-size: medium;">Computing queryNorm</span></strong></p>
<p><span style="font-size: medium;">queryNorm is implemented in DefaultSimilarity as follows:</span></p>
<p><span style="font-size: medium;"><span style="color: #339966;">/** Implemented as &lt;code&gt;1/sqrt(sumOfSquaredWeights)&lt;/code&gt;. */</span><br />
public float queryNorm(float sumOfSquaredWeights) {<br />
&nbsp;&nbsp;&nbsp; return (float)(1.0 / Math.sqrt(sumOfSquaredWeights));<br />
}</span></p>
<p><span style="font-size: medium;">The sumOfSquaredWeights argument comes from the sumOfSquaredWeights method of the org.apache.lucene.search.TermQuery.TermWeight class:</span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp; public float sumOfSquaredWeights() {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; queryWeight = idf * getBoost();&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="font-size: medium;"><span style="color: #339966;"> // compute query weight<br />
</span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return queryWeight * queryWeight;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="font-size: medium;"><span style="color: #339966;"> // square it<br />
</span>&nbsp;&nbsp;&nbsp; }</span></p>
<p><span style="font-size: medium;">By default sumOfSquaredWeights = idf * idf, because Lucene's default boost factor is 1.0.</span></p>
<p><span style="font-size: medium;">In the test above, sumOfSquaredWeights is computed as:</span></p>
<p><span style="font-size: medium;">sumOfSquaredWeights = 0.81767845 * 0.81767845 = 0.6685980475944025;</span></p>
<p><span style="font-size: medium;">queryNorm can then be computed:</span></p>
<p><span style="font-size: medium;">queryNorm = (float)(1.0 / Math.sqrt(0.6685980475944025)) = 1.2229746301862302962735534977105;</span></p>
<p><strong><span style="font-size: medium;">Computing value</span></strong></p>
<p><span style="font-size: medium;">The org.apache.lucene.search.TermQuery.TermWeight class also defines a value member:</span></p>
<p><span style="font-size: medium;">private float value;</span></p>
<p><span style="font-size: medium;">value is computed in TermWeight's normalize method:</span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp; public void normalize(float queryNorm) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; this.queryNorm = queryNorm;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; queryWeight *= queryNorm;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <span style="color: #339966;">// normalize query weight</span><br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; value = queryWeight * idf;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #339966;"> // idf for document</span><br />
&nbsp;&nbsp;&nbsp; }</span></p>
<p><span style="font-size: medium;">That is, normalize computes value as:</span></p>
<p><span style="font-size: medium;">value = queryNorm * queryWeight * idf;</span></p>
<p><span style="font-size: medium;">In the test above, with queryWeight = idf * boost = 0.81767845, value is computed as:</span></p>
<p><span style="font-size: medium;">value = 1.2229746301862302962735534977105 * 0.81767845 * 0.81767845 &#8776; 0.81767845;</span></p>
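The whole chain, queryWeight, sumOfSquaredWeights, queryNorm, and value, can be checked with plain arithmetic. A standalone sketch (class and method names invented for illustration), assuming the default boost = 1.0:

```java
// Standalone sketch of the query normalization chain in TermWeight;
// not actual Lucene code.
public class QueryNormDemo {
    // mirrors DefaultSimilarity.queryNorm = 1/sqrt(sumOfSquaredWeights)
    static float queryNorm(float sumOfSquaredWeights) {
        return (float) (1.0 / Math.sqrt(sumOfSquaredWeights));
    }

    // reproduces sumOfSquaredWeights() followed by normalize()
    static float value(float idf, float boost) {
        float queryWeight = idf * boost;       // compute query weight
        float sum = queryWeight * queryWeight; // square it
        queryWeight *= queryNorm(sum);         // normalize query weight
        return queryWeight * idf;              // value = queryNorm * idf^2 * boost
    }

    public static void main(String[] args) {
        float idf = 0.81767845f;
        System.out.println(queryNorm(idf * idf)); // ~1.2229746
        System.out.println(value(idf, 1.0f));     // with boost = 1, value == idf
    }
}
```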
<p><strong><span style="font-size: medium;">About fieldWeight</span></strong></p>
<p><span style="font-size: medium;">From the search results we can see:</span></p>
<p><span style="font-size: medium;">0.81767845 = (MATCH) fieldWeight(contents:一人 in 0), product of:</span></p>
<p><span style="font-size: medium;">The string "(MATCH) " is emitted by the getSummary method of the ComplexExplanation class:</span></p>
<p><span style="font-size: medium;">protected String getSummary() {<br />
&nbsp;&nbsp;&nbsp; if (null == getMatch())<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return super.getSummary();<br />
&nbsp;&nbsp;&nbsp; <br />
&nbsp;&nbsp;&nbsp; return getValue() + " = "<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; + (isMatch() ? "(MATCH) " : "(NON-MATCH) ")<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; + getDescription();<br />
}</span></p>
<p><span style="font-size: medium;">This fieldWeight value is in fact equal to the Document's score. To see how it is computed, look at the explain method of the org.apache.lucene.search.TermQuery.TermWeight class:</span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ComplexExplanation <span style="color: #ff0000;">fieldExpl</span> = new ComplexExplanation();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <span style="color: #ff0000;">fieldExpl</span>.setDescription("fieldWeight("+term+" in "+doc+<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; "), product of:");</span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Explanation tfExpl = scorer(reader).explain(doc);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <span style="color: #ff0000;">fieldExpl</span>.addDetail(tfExpl);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <span style="color: #ff0000;">fieldExpl</span>.addDetail(idfExpl);</span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Explanation fieldNormExpl = new Explanation();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; byte[] fieldNorms = reader.norms(field);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; float fieldNorm =<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; fieldNorms!=null ? Similarity.decodeNorm(fieldNorms[doc]) : 0.0f;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; fieldNormExpl.setValue(fieldNorm);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; fieldNormExpl.setDescription("fieldNorm(field="+field+", doc="+doc+")");<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <span style="color: #ff0000;">fieldExpl</span>.addDetail(fieldNormExpl);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <span style="color: #ff0000;">fieldExpl</span>.setMatch(Boolean.valueOf(tfExpl.isMatch()));<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <span style="color: #ff0000;">fieldExpl</span>.setValue(tfExpl.getValue() *<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; idfExpl.getValue() *<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; fieldNormExpl.getValue());</span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; result.addDetail(<span style="color: #ff0000;">fieldExpl</span>);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; result.setMatch(<span style="color: #ff0000;">fieldExpl</span>.getMatch());<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #339966;"> // combine them</span><br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; result.setValue(queryExpl.getValue() * <span style="color: #ff0000;">fieldExpl</span>.getValue());</span></p>
<p><span style="font-size: medium;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (queryExpl.getValue() == 1.0f)<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return <span style="color: #ff0000;">fieldExpl</span>;</span></p>
<p><span style="font-size: medium;">Above, the ComplexExplanation fieldExpl is populated with several components; from these we can read off how fieldWeight is computed.</span></p>
<p><span style="font-size: medium;">The key computation is here:</span></p>
<p><span style="font-size: medium;"><span style="color: #ff0000;">fieldExpl</span>.setValue(tfExpl.getValue() *<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; idfExpl.getValue() *<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; fieldNormExpl.getValue());</span></p>
<p><span style="font-size: medium;">Expressed as a formula:</span></p>
<p><span style="font-size: medium;">fieldWeight = tf * idf * fieldNorm</span></p>
<p><span style="font-size: medium;">fieldNorm的值因为是在建立索引的时候写入到索引文件中的，索引只需要从上面的测试结果中取来，进行如下关于Document的分数的计算的验证。</span></p>
<p><span style="font-size: medium;">使用我们这个例子来计算检索出来的Docuyment的fieldWeight，需要用到前面计算出来的结果，如下所示：</span></p>
<p><span style="font-size: medium;">Document 0: fieldWeight = 1.0 * 0.81767845 * 1.0 = 0.81767845;<br />
Document 1: fieldWeight = 1.0 * 0.81767845 * 0.5 = 0.408839225;<br />
Document 2: fieldWeight = 1.0 * 0.81767845 * 0.5 = 0.408839225;<br />
Document 3: fieldWeight = 1.4142135 * 0.81767845 * 0.4375 = 0.5059127074089703125;<br />
Document 4: fieldWeight = 1.4142135 * 0.81767845 * 0.4375 = 0.5059127074089703125;</span></p>
<p><span style="font-size: medium;">Comparing with the search results, each Document's score is exactly this fieldWeight value; the verification matches (note: no rounding was applied here).</span></p>
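The arithmetic above can be double-checked with a small standalone sketch. This is plain Java with the tf/idf/fieldNorm values hard-coded from this example's Explanation output; it has no Lucene dependency and only mirrors the fieldWeight product:

```java
public class FieldWeightCheck {
    // fieldWeight = tf * idf * fieldNorm, per the Explanation breakdown above
    static float fieldWeight(float tf, float idf, float fieldNorm) {
        return tf * idf * fieldNorm;
    }

    public static void main(String[] args) {
        float idf = 0.81767845f;
        // doc 0: tf = 1.0, fieldNorm = 1.0
        System.out.println(fieldWeight(1.0f, idf, 1.0f));
        // docs 1 and 2: tf = 1.0, fieldNorm = 0.5
        System.out.println(fieldWeight(1.0f, idf, 0.5f));
        // docs 3 and 4: tf = sqrt(2) ~ 1.4142135, fieldNorm = 0.4375
        System.out.println(fieldWeight(1.4142135f, idf, 0.4375f));
    }
}
```

Running it reproduces the three distinct scores listed above (up to float precision).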
<p><strong><span style="font-size: medium;">Summary</span></strong></p>
<p><span style="font-size: medium;">The scores above were computed with Lucene's default settings; for instance, the boost factor defaults to 1.0. Boost expresses a Document's importance, which is what fieldWeight reflects.</span></p>
<p><span style="font-size: medium;">A boost can be set not only on a Document but also on an individual Field within it, since a Document's weight is carried by its Fields; that is why the fieldWeight computed above equals the Document's score.</span></p>
<p><span style="font-size: medium;">Raising a Document's boost moves it up in the default ordering of search results, marking it as more important. In other words, changing the boost factor changes the ranking of search results.</span></p>
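A sketch of that effect, assuming a simplified model in which the boost just multiplies into an otherwise fixed base score (the numbers are hypothetical; this is not Lucene's actual ranking code):

```java
import java.util.Arrays;
import java.util.Comparator;

public class BoostRanking {
    // finalScore = baseScore * boost (a simplification of how boost folds into the score);
    // returns document indices ordered by descending final score
    static Integer[] rank(double[] baseScores, double[] boosts) {
        Integer[] order = new Integer[baseScores.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, Comparator.comparingDouble((Integer i) -> -baseScores[i] * boosts[i]));
        return order;
    }

    public static void main(String[] args) {
        double[] base = {0.40, 0.81, 0.50};
        // all boosts 1.0: ranked purely by base score -> [1, 2, 0]
        System.out.println(Arrays.toString(rank(base, new double[]{1.0, 1.0, 1.0})));
        // boosting doc 0 to 3.0 lifts it to the top -> [0, 1, 2]
        System.out.println(Arrays.toString(rank(base, new double[]{3.0, 1.0, 1.0})));
    }
}
```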
</div>
<br />
<br />
<br />
<br />
<br />
<img src ="http://www.blogjava.net/ashutc/aggbug/348339.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ashutc/" target="_blank">西瓜</a> 2011-04-15 11:02 <a href="http://www.blogjava.net/ashutc/archive/2011/04/15/348339.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Sphinx</title><link>http://www.blogjava.net/ashutc/archive/2011/04/01/347467.html</link><dc:creator>西瓜</dc:creator><author>西瓜</author><pubDate>Fri, 01 Apr 2011 06:13:00 GMT</pubDate><guid>http://www.blogjava.net/ashutc/archive/2011/04/01/347467.html</guid><wfw:comment>http://www.blogjava.net/ashutc/comments/347467.html</wfw:comment><comments>http://www.blogjava.net/ashutc/archive/2011/04/01/347467.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ashutc/comments/commentRss/347467.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ashutc/services/trackbacks/347467.html</trackback:ping><description><![CDATA[<strong></strong> First, download
mysql-5.0.45-sphinxse-0.9.8-win32.zip and
sphinx-0.9.8.1-win32.zip from the Sphinx site at http://www.sphinxsearch.com/downloads.html. This walkthrough assumes MySQL is already installed.
<br />
<br />
Stop the MySQL service first. Unzip mysql-5.0.45-sphinxse-0.9.8-win32.zip and overwrite the bin and share
directories of your MySQL installation with its bin and share. Then unzip sphinx-0.9.8.1-win32.zip
into a directory of its own, e.g. d:/www/sphinx/.
<br />
<br />
Next, start the MySQL service, create a "test" database, and run the following SQL:
<br />
<br />
-----------------------------------------------------------
<br />
<br />
CREATE TABLE `documents` (
<br />
`id` int(11) NOT NULL auto_increment,
<br />
`group_id` int(11) NOT NULL,
<br />
`group_id2` int(11) NOT NULL,
<br />
`date_added` datetime NOT NULL,
<br />
`title` varchar(255) NOT NULL,
<br />
`content` text NOT NULL,
<br />
PRIMARY KEY (`id`)
<br />
) ENGINE=InnoDB AUTO_INCREMENT=5;
<br />
<br />
INSERT INTO `documents` VALUES ('1', '1', '5', '2008-09-13
21:37:47', 'test one', 'this is my test document number one. also
checking search within phrases.');
<br />
INSERT INTO `documents` VALUES ('2', '1', '6', '2008-09-13 21:37:47', 'test two', 'this is my test document number two');
<br />
INSERT INTO `documents` VALUES ('3', '2', '7', '2008-09-13 21:37:47', 'another doc', 'this is another group');
<br />
INSERT INTO `documents` VALUES ('4', '2', '8', '2008-09-13 21:37:47', 'doc number four', 'this is to test groups');
<br />
<br />
------------------------------------------- (This table is in fact the one created by Sphinx's bundled example.sql.)
<br />
<br />
Our test table is now in place. Next we configure the sphinx-doc.conf file (important).
<br />
<br />
Copy sphinx-min.conf from the Sphinx directory, rename the copy to sphinx-doc.conf, and then edit it:
<br />
<br />
----------------------------------------------------------------------
<br />
<br />
#
<br />
# Minimal Sphinx configuration sample (clean, simple, functional)
<br />
#
<br />
# type ---------------------------------- database type; mysql and pgsql are currently supported
<br />
# strip_html ---------------------------- whether to strip HTML tags
<br />
# sql_host ------------------------------ database host address
<br />
# sql_user ------------------------------ database user name
<br />
# sql_pass ------------------------------ database password
<br />
# sql_db -------------------------------- database name
<br />
# sql_port ------------------------------ database port
<br />
# sql_query_pre ------------------------- character set to set before queries run; with utf8 you must SET NAMES utf8
<br />
# sql_query ----------------------------- the content to index for full-text search; avoid WHERE and
# GROUP BY here where possible, and leave filtering and grouping to Sphinx, which performs them more efficiently
<br />
# Note: the SELECT must include at least one unique primary key (ARTICLESID) plus the fields to be full-text indexed; also select any field you would otherwise use in WHERE
<br />
# Do not use ORDER BY here
<br />
# Settings prefixed sql_attr_ are attribute fields: define here any field you plan to use in WHERE, ORDER BY, or GROUP BY (# marks my own comments)
<br />
<br />
# source (data source name):
<br />
<br />
source documents
<br />
{
<br />
type&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; = mysql
<br />
sql_host&nbsp;&nbsp;&nbsp;&nbsp; = localhost
<br />
sql_user&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; = root
<br />
sql_pass&nbsp;&nbsp;&nbsp;&nbsp; = yourpassword
<br />
sql_db&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; = test
<br />
sql_port&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; = 3306 # optional, default is 3306
<br />
<br />
sql_query_pre&nbsp;&nbsp;&nbsp;&nbsp; = SET NAMES utf8
<br />
sql_query&nbsp;&nbsp;&nbsp;&nbsp; = \
<br />
&nbsp;&nbsp; SELECT id, group_id, UNIX_TIMESTAMP(date_added) AS date_added, title, content \
<br />
&nbsp;&nbsp; FROM documents
<br />
<br />
sql_attr_uint&nbsp;&nbsp;&nbsp; = group_id
<br />
sql_attr_timestamp&nbsp;&nbsp; = date_added
<br />
<br />
sql_query_info&nbsp;&nbsp;&nbsp; = SELECT * FROM documents WHERE id=$id
<br />
}
<br />
<br />
<br />
index documents
<br />
{
<br />
source&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; = documents
<br />
<br />
#path&nbsp;&nbsp; directory for the index files, e.g. d:/sphinx/data/cgfinal; the files actually land in d:/sphinx/data as several index files named cgfinal with different extensions.
<br />
path&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; = d:/www/sphinx/data/doc
<br />
docinfo&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; = extern
<br />
enable_star&nbsp;&nbsp;&nbsp;&nbsp; = 1
<br />
<br />
min_word_len&nbsp;&nbsp;&nbsp;&nbsp; = 3
<br />
min_prefix_len&nbsp;&nbsp;&nbsp;&nbsp; = 0
<br />
min_infix_len&nbsp;&nbsp;&nbsp;&nbsp; = 3
<br />
charset_type&nbsp;&nbsp;&nbsp; = sbcs
<br />
<br />
# Settings such as min_word_len, charset_type, charset_table, ngram_chars, and ngram_len are what enable Chinese-text search.
<br />
# For non-Chinese content, charset_table, ngram_chars, and min_word_len need different values; the official forum has many examples worth searching for.
<br />
}
<br />
<br />
# mem_limit: maximum memory the indexer may use; depends on the machine. The default is 32M, and setting it too low hurts indexing performance.
<br />
indexer
<br />
{
<br />
mem_limit&nbsp;&nbsp;&nbsp;&nbsp; = 32M
<br />
}
<br />
<br />
# Configuration for the search daemon
<br />
# searchd must be started before full-text searches run; MySQL connects to Sphinx at search time, Sphinx performs the search, and the results are returned to MySQL
<br />
# address: the address to listen on; if unset, listens on all addresses
<br />
# port: the port to listen on
<br />
searchd
<br />
{
<br />
port&nbsp;&nbsp;&nbsp;&nbsp; = 3312
<br />
log&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; =d:/www/sphinx/logs/searched_doc.log
<br />
query_log&nbsp;&nbsp;&nbsp;&nbsp; = d:/www/sphinx/logs/query_doc.log
<br />
read_timeout&nbsp;&nbsp;&nbsp; = 5
<br />
max_children&nbsp;&nbsp;&nbsp; = 30
<br />
pid_file&nbsp;&nbsp;&nbsp;&nbsp; = d:/www/sphinx/logs/searched-doc.pid
<br />
max_matches&nbsp;&nbsp;&nbsp;&nbsp; = 1000
<br />
seamless_rotate&nbsp;&nbsp;&nbsp; = 0
<br />
preopen_indexes&nbsp;&nbsp;&nbsp; = 0
<br />
unlink_old&nbsp;&nbsp;&nbsp;&nbsp; = 1
<br />
}
<br />
<br />
<br />
----------------------------------------------------------------------
<br />
<br />
<br />
The Sphinx configuration file is now ready for testing. Make sure MySQL is running; if it is not, type "net start mysql" at the cmd prompt.
<br />
<br />
Now the test proper begins:
<br />
<br />
1. Build (or rebuild) the data index:
<br />
<br />
(It is easiest to also copy sphinx-doc.conf into the bin folder; the examples below assume you have done so.)
<br />
<br />
At the cmd prompt, enter:
<br />
<br />
d:/www/sphinx/bin/indexer.exe --config d:/www/sphinx/bin/sphinx-doc.conf documents
<br />
<br />
2. Start the search daemon, searchd.exe:
<br />
<br />
d:/www/sphinx/bin/searchd.exe --config d:/www/sphinx/bin/sphinx-doc.conf
<br />
<br />
If neither step reports an error, Sphinx is up and running. You can check with netstat -an that port 3312 is in the LISTENING state.
<br />
<br />
3. Now test with Sphinx's bundled search.exe tool:
<br />
<br />
Test:
<br />
<br />
Search keywords: this is m
<br />
<br />
D:\www\sphinx\bin&gt;search.exe -c d:/www/sphinx/bin/sphinx-doc.conf this is m
<br />
<br />
Result:
<br />
<br />
Sphinx 0.9.8-release (r1371)
<br />
Copyright (c) 2001-2008, Andrew Aksyonoff
<br />
<br />
using config file 'd:/www/sphinx/bin/sphinx-doc.conf'...
<br />
WARNING: index 'documents': invalid morphology option 'extern' - IGNORED
<br />
index 'documents': query 'this is m ': returned 4 matches of 4 total in 0.000 sec
<br />
<br />
<br />
displaying matches:
<br />
1. document=1, weight=1, group_id=1, date_added=Sat Sep 13 21:37:47 2008
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; id=1
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; group_id=1
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; group_id2=5
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; date_added=2008-09-13 21:37:47
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; title=test one
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; content=this is my test document number one. also checking search within phrases.
<br />
2. document=2, weight=1, group_id=1, date_added=Sat Sep 13 21:37:47 2008
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; id=2
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; group_id=1
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; group_id2=6
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; date_added=2008-09-13 21:37:47
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; title=test two
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; content=this is my test document number two
<br />
3. document=3, weight=1, group_id=2, date_added=Sat Sep 13 21:37:47 2008
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; id=3
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; group_id=2
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; group_id2=7
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; date_added=2008-09-13 21:37:47
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; title=another doc
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; content=this is another group
<br />
4. document=4, weight=1, group_id=2, date_added=Sat Sep 13 21:37:47 2008
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; id=4
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; group_id=2
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; group_id2=8
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; date_added=2008-09-13 21:37:47
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; title=doc number four
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; content=this is to test groups
<br />
<br />
words:
<br />
1. 'this': 4 documents, 4 hits
<br />
<br />
-------------------
<br />
<br />
Search keywords: this is another group
<br />
<br />
D:\www\sphinx\bin&gt;search.exe -c d:/www/sphinx/bin/sphinx-doc.conf this is another group
<br />
<br />
Result:
<br />
<br />
Sphinx 0.9.8-release (r1371)
<br />
Copyright (c) 2001-2008, Andrew Aksyonoff
<br />
<br />
using config file 'd:/www/sphinx/bin/sphinx-doc.conf'...
<br />
WARNING: index 'documents': invalid morphology option 'extern' - IGNORED
<br />
index 'documents': query 'this is another group ': returned 1 matches of 1 total in 0.000 sec
<br />
<br />
displaying matches:
<br />
1. document=3, weight=4, group_id=2, date_added=Sat Sep 13 21:37:47 2008
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; id=3
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; group_id=2
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; group_id2=7
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; date_added=2008-09-13 21:37:47
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; title=another doc
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; content=this is another group
<br />
<br />
words:
<br />
1. 'this': 4 documents, 4 hits
<br />
2. 'another': 1 documents, 2 hits
<br />
3. 'group': 1 documents, 1 hits
<br />
<br />
-------------------
<br />
<br />
At this point Sphinx is running normally on Windows. The sphinx-doc.conf file is quite flexible; adapt it to the database you are indexing to get the results you need.
<br />
<br />
If you run into parameter problems while configuring, consult the doc/sphinx.html file, which documents every parameter in detail.
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<img src ="http://www.blogjava.net/ashutc/aggbug/347467.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ashutc/" target="_blank">西瓜</a> 2011-04-01 14:13 <a href="http://www.blogjava.net/ashutc/archive/2011/04/01/347467.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>lucene优化</title><link>http://www.blogjava.net/ashutc/archive/2010/09/02/330669.html</link><dc:creator>西瓜</dc:creator><author>西瓜</author><pubDate>Thu, 02 Sep 2010 01:56:00 GMT</pubDate><guid>http://www.blogjava.net/ashutc/archive/2010/09/02/330669.html</guid><wfw:comment>http://www.blogjava.net/ashutc/comments/330669.html</wfw:comment><comments>http://www.blogjava.net/ashutc/archive/2010/09/02/330669.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ashutc/comments/commentRss/330669.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ashutc/services/trackbacks/330669.html</trackback:ping><description><![CDATA[<div class="blog_content">
<p><strong>Boosting Documents and Fields</strong>
<br />
setBoost(float) sets the importance of a Document or Field within the index<br />
<br />
A boost can be set on a document and also on a field<br />
Changing a boost after indexing requires deleting the original document and re-indexing it<br />
<br />
doc.setBoost();<br />
field.setBoost();</p>
<p>&nbsp;</p>
<p>How is boost stored in the index? Via norms.<br />
The boosts generated during indexing are combined into a single float, which is then stored<br />
as one byte per field per document. At query time, each field's norms are loaded into memory and decoded back into floats.<br />
<br />
Although norms are produced during indexing, they can still be changed with IndexReader's setNorm method.<br />
<br />
Norms can consume a lot of memory during searching.<br />
They can be disabled with Field.setOmitNorms(true); this may affect scoring, but the effect<br />
is usually negligible.<br />
<br />
<strong>Indexing dates &amp; times</strong>
<br />
DateTools.dateToString(new Date(), DateTools.Resolution.DAY);<br />
<br />
<strong>Indexing numbers</strong>
<br />
Lucene orders field values lexicographically. Given the three numbers 7, 71, and 20, numeric order is 7, 20, 71, but lexicographic order is 20, 7, 71. A simple and common fix is to zero-pad the numbers: 007, 020, 071<br />
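A quick plain-Java illustration of the problem and of the zero-padding fix (this demonstrates the generic string-sorting technique, not a Lucene API):

```java
import java.util.Arrays;

public class ZeroPad {
    public static void main(String[] args) {
        String[] raw = {"7", "71", "20"};
        Arrays.sort(raw);                       // lexicographic: [20, 7, 71]
        System.out.println(Arrays.toString(raw));

        String[] padded = {"007", "071", "020"};
        Arrays.sort(padded);                    // matches numeric order: [007, 020, 071]
        System.out.println(Arrays.toString(padded));
    }
}
```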
<strong><br />
Indexing fields for sorting</strong>
<br />
Index the field without analyzing it (Field.Index.NOT_ANALYZED); the field values must be Integers, Floats, or Strings<br />
<br />
<strong>Field truncation</strong>
<br />
Suppose you want to index only the first 200 terms of each document:<br />
pass a MaxFieldLength argument to the IndexWriter constructor.<br />
The predefined values are MaxFieldLength.UNLIMITED and MaxFieldLength.LIMITED.<br />
<br />
The limit can be changed later by calling setMaxFieldLength().<br />
<br />
IndexWriter.setInfoStream(System.out) prints information about merges and deletions, and reports when maxFieldLength is reached<br />
<strong><br />
Optimizing an index</strong>
<br />
Optimizing an index only speeds up searching; it does not speed up indexing, and good search throughput is often achievable without optimizing at all</p>
<p><br />
IndexWriter provides four optimize methods</p>
<ul>
    <li>optimize(): merges the index down to a single segment and does not return until the operation completes</li>
</ul>
<ul>
    <li>optimize(int maxNumSegments): partial optimization. Merging down to the very last segment generally costs the most time, so optimizing to 5 segments is faster than optimizing to 1</li>
</ul>
<ul>
    <li>optimize(boolean doWait): same as optimize(), except that when doWait is false the method returns immediately and the merge runs in the background</li>
</ul>
<ul>
    <li>optimize(int maxNumSegments,boolean doWait)</li>
</ul>
</div>
<img src ="http://www.blogjava.net/ashutc/aggbug/330669.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ashutc/" target="_blank">西瓜</a> 2010-09-02 09:56 <a href="http://www.blogjava.net/ashutc/archive/2010/09/02/330669.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Lucene打分公式</title><link>http://www.blogjava.net/ashutc/archive/2010/07/29/327432.html</link><dc:creator>西瓜</dc:creator><author>西瓜</author><pubDate>Thu, 29 Jul 2010 07:15:00 GMT</pubDate><guid>http://www.blogjava.net/ashutc/archive/2010/07/29/327432.html</guid><wfw:comment>http://www.blogjava.net/ashutc/comments/327432.html</wfw:comment><comments>http://www.blogjava.net/ashutc/archive/2010/07/29/327432.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ashutc/comments/commentRss/327432.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ashutc/services/trackbacks/327432.html</trackback:ping><description><![CDATA[<p>Before dissecting Lucene's search process, it is worth a standalone chapter deriving the Lucene score formula and explaining what each part means, because a crucial step in Lucene's search process is computing each part of the score step by step.</p>
<p>Lucene's scoring formula is quite complex:</p>
<p>&nbsp;</p>
<p><img src="http://dl.javaeye.com/upload/attachment/213515/fe3f53e8-8f19-374b-8a7a-aada0d8592f1.png" alt="" /></p>
<p>&nbsp;</p>
<p>Before the derivation, here is what each part means:</p>
<ul>
    <li>t: a Term; here a Term carries field information, i.e. title:hello and content:hello are different Terms </li>
    <li>coord(q,d): a search may contain several terms and a document may contain several of them; the more of the search terms a document contains, the higher it scores. </li>
    <li>queryNorm(q): a normalization over the query terms; it does not affect ranking but makes scores comparable across different queries. Its formula is: </li>
</ul>
<div><img src="http://dl.javaeye.com/upload/attachment/213517/b160ae4a-2546-3cdf-b121-6c305400ae14.png" alt="" /></div>
<ul>
    <li>tf(t in d): the frequency of Term t in document d </li>
    <li>idf(t): based on the number of documents in which Term t appears </li>
    <li>norm(t, d): the normalization factor, which combines three parameters:
    <ul>
        <li>Document boost: the larger this value, the more important the document. </li>
        <li>Field boost: the larger this value, the more important the field. </li>
        <li>lengthNorm(field) = (1.0 / Math.sqrt(numTerms)): the more Terms a field contains (the longer the document), the smaller this value; the shorter the document, the larger it is. </li>
    </ul>
    </li>
</ul>
<p><img src="http://dl.javaeye.com/upload/attachment/213519/21ff38af-7067-385d-bf41-5bff4a9d6085.png" alt="" /></p>
<p><img src="http://dl.javaeye.com/upload/attachment/213521/b52895db-a7c3-304e-811e-32c91aea440f.png" alt="" />&nbsp; </p>
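The lengthNorm part of norm(t, d) can be sketched on its own in plain Java, mirroring the 1.0 / Math.sqrt(numTerms) formula quoted above (this is the DefaultSimilarity behaviour, reimplemented standalone for illustration):

```java
public class LengthNorm {
    // lengthNorm(field) = 1 / sqrt(numTerms): longer fields are damped
    static float lengthNorm(int numTerms) {
        return (float) (1.0 / Math.sqrt(numTerms));
    }

    public static void main(String[] args) {
        System.out.println(lengthNorm(1));   // a one-term field gets full weight, 1.0
        System.out.println(lengthNorm(4));   // 0.5
        System.out.println(lengthNorm(100)); // 0.1
    }
}
```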
<p>&nbsp;</p>
<ul>
    <li>The various boost values:
    <ul>
        <li>t.getBoost(): the weight of each term in the query; you can make a term more important, e.g. common^4 hello </li>
        <li>d.getBoost(): the document boost, written to the nrm file at indexing time, marking some documents as more important than others. </li>
        <li>f.getBoost(): the field boost, written to the nrm file at indexing time, marking some fields as more important than others. </li>
    </ul>
    </li>
</ul>
<p>All of the above is covered in detail in the Lucene documentation and has been discussed in many articles; for how to tune these parts to influence document scoring, see <a href="http://www.javaeye.com/blog/591804">Lucene Q&amp;A (4): four ways to influence how Lucene scores documents</a>.</p>
<p>But why are these parts multiplied together like this? Where does such a complex formula come from? Let us derive it.</p>
<p>First, substituting each part into the score(q, d) formula yields something very unwieldy, so let us ignore all the boosts, since they are manual adjustments, and also omit coord, which is unrelated to the principle the formula expresses. That leaves the following formula:</p>
<p><img src="http://dl.javaeye.com/upload/attachment/213523/c5b2cdfd-8d14-3801-be4e-f738e20c04c9.png" alt="" /></p>
<p>&nbsp;</p>
<p>Next, as described in <a href="http://www.javaeye.com/blog/546771">Lucene study notes, part 1: the fundamentals of full-text search</a>, Lucene's scoring mechanism uses the vector space model:</p>
<p>We view a document as a sequence of Terms, each with a weight (term weight); each Term influences the document's relevance score according to its weight in that document. </p>
<p>So we treat the weights of all the Terms in a document as a vector. </p>
<p>Document = {term1, term2, &#8230;&#8230; ,term N} </p>
<p>Document Vector = {weight1, weight2, &#8230;&#8230; ,weight N} </p>
<p>Likewise, we view the query as a small document and represent it as a vector too. </p>
<p>Query = {term1, term 2, &#8230;&#8230; , term N} </p>
<p>Query Vector = {weight1, weight2, &#8230;&#8230; , weight N} </p>
<p>We place all the retrieved document vectors and the query vector into an N-dimensional space, with one dimension per Term. </p>
<p><img src="http://dl.javaeye.com/upload/attachment/213525/8432e890-3798-3ede-9636-25de656449e3.png" alt="" /></p>
<p>&nbsp;</p>
<p>The smaller the angle between two vectors, the more relevant they are. </p>
<p>So we use the cosine of the angle as the relevance score: the smaller the angle, the larger the cosine, the higher the score, and the greater the relevance. </p>
<p>The cosine formula is:</p>
<p><img src="http://dl.javaeye.com/upload/attachment/213527/50a4335b-23c9-3105-b91b-7469c82a34df.png" alt="" /></p>
<p>&nbsp;</p>
<p>Now assume:</p>
<p>the query vector is Vq = &lt;w(t1, q), w(t2, q), &#8230;&#8230;, w(tn, q)&gt;</p>
<p>the document vector is Vd = &lt;w(t1, d), w(t2, d), &#8230;&#8230;, w(tn, d)&gt;</p>
<p>The dimension n of the vector space is the size of the union of the query's and the document's terms; w(t, q) is zero when a Term does not appear in the query, and w(t, d) is zero when it does not appear in the document.</p>
<p>w stands for weight, usually computed as tf*idf.</p>
<p>We first compute the numerator of the cosine formula, the dot product of the two vectors:</p>
<p>Vq*Vd = w(t1, q)*w(t1, d) + w(t2, q)*w(t2, d) + &#8230;&#8230; + w(tn ,q)*w(tn, d) </p>
<p>Substituting the formula for w gives:</p>
<p>Vq*Vd = tf(t1, q)*idf(t1, q)*tf(t1, d)*idf(t1, d) + tf(t2, q)*idf(t2,
q)*tf(t2, d)*idf(t2, d) + &#8230;&#8230; + tf(tn ,q)*idf(tn, q)*tf(tn, d)*idf(tn,
d)</p>
<p>Three points deserve mention here:</p>
<ul>
    <li>Since this is a dot product, among t1, t2, &#8230;&#8230;, tn only Terms present in both the query and the document yield non-zero products; a Term appearing only in the query or only in the document contributes zero. </li>
    <li>People rarely repeat a word within a query, so we can assume tf(t, q) is always 1 </li>
    <li>idf counts the documents a Term appears in, which includes the query itself viewed as a tiny document, so idf(t, q) and idf(t, d) are really the same: the total number of documents in the index plus one. When the index holds enough documents, the query's own tiny document can be ignored, so we can assume idf(t, q) = idf(t, d) = idf(t) </li>
</ul>
<p>Based on these three points, the dot product becomes:</p>
<p>Vq*Vd = tf(t1, d) * idf(t1) * idf(t1) + tf(t2, d) * idf(t2) * idf(t2) + &#8230;&#8230; + tf(tn, d) * idf(tn) * idf(tn)</p>
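The simplified score at this stage of the derivation can be sketched in plain Java. The tf and idf values below are hypothetical, and this is a toy model of the formula being derived (tf(t,q) assumed 1, all boosts and coord ignored, |Vd| = sqrt(numTerms) as in the DefaultSimilarity discussion that follows), not Lucene's implementation:

```java
public class SimplifiedScore {
    // Vq*Vd = sum over terms of tf(t,d) * idf(t)^2, assuming tf(t,q) = 1
    static double dotProduct(double[] tf, double[] idf) {
        double sum = 0;
        for (int i = 0; i < tf.length; i++) sum += tf[i] * idf[i] * idf[i];
        return sum;
    }

    // divide by |Vq| = sqrt(sum of idf^2) and |Vd| = sqrt(numTerms in the document)
    static double score(double[] tf, double[] idf, int docTerms) {
        double vqSq = 0;
        for (double x : idf) vqSq += x * x;
        return dotProduct(tf, idf) / (Math.sqrt(vqSq) * Math.sqrt(docTerms));
    }

    public static void main(String[] args) {
        double[] tf = {2, 1};        // term frequencies in the document (hypothetical)
        double[] idf = {1.5, 0.8};   // idf values (hypothetical)
        System.out.println(score(tf, idf, 10));
    }
}
```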
<p>So the cosine formula becomes:</p>
<p><img src="http://dl.javaeye.com/upload/attachment/213529/8a6a10ef-63a1-3044-a26c-f8cc7326663b.png" alt="" /></p>
<p>&nbsp;</p>
<p>Next we derive the length of the query vector.</p>
<p>From the discussion above, tf in the query is always 1 and idf ignores the query's own tiny document, giving the following formula:</p>
<p><img src="http://dl.javaeye.com/upload/attachment/213533/702d8d30-b2ba-323b-8271-de5b956ad75e.png" alt="" /></p>
<p>&nbsp;</p>
<p>So the cosine formula becomes:</p>
<p><img src="http://dl.javaeye.com/upload/attachment/213535/9d71d606-8d67-35ad-ba47-95524e5f7219.png" alt="" /></p>
<p>&nbsp;</p>
<p>Next comes the document length. In principle, the document length formula should be:</p>
<p><img src="http://dl.javaeye.com/upload/attachment/213537/60cc56e4-fd91-3a0f-a40c-89cb09a58997.png" alt="" /></p>
<p>&nbsp;</p>
<p>What needs discussing here is: why divide by the document length during scoring at all?</p>
<p>Because documents in an index differ in length, and for any given term the tf in a long document is obviously much larger, so its score would be higher, which is unfair to short documents. As an extreme example, if the word "lucene" appears 11 times in a ten-million-word tome and 10 times in a 12-word snippet, then without accounting for length the tome would of course score higher, yet the snippet is clearly the one truly concerned with "lucene".</p>
<p>On the other hand, following the standard cosine formula and eliminating the influence of document length completely is unfair to long documents (they do contain more information, after all) and biases results toward returning short documents first, which makes real-world search results look poor.</p>
<p>That is why Lucene leaves Similarity's lengthNorm hook open, so users can rewrite the lengthNorm computation to fit their application. Say I am building a search system for economics papers: after some research I find that most economics papers run 8,000 to 10,000 words, so lengthNorm should follow an inverted parabola, with papers of 8,000 to 10,000 words scoring highest and shorter or longer ones scoring lower, to return the best results to users.</p>
<p>By default, Lucene uses DefaultSimilarity, which, when computing a document's vector length, no longer considers each Term's weight and treats them all as one.</p>
<p>And from the definition of Term we know that a Term carries field information, i.e. title:hello and content:hello are different Terms; in other words, a Term can only appear in one field of a document.</p>
<p>So the document length formula becomes:</p>
<p><img src="http://dl.javaeye.com/upload/attachment/213539/39df6cd6-159a-3f92-aeb7-b7091393717c.png" alt="" /></p>
<p>&nbsp;</p>
<p>Substituting into the cosine formula:</p>
<p><img src="http://dl.javaeye.com/upload/attachment/213541/0e27890e-12c6-3b5d-864d-42bd703ea86c.png" alt="" /></p>
<p>&nbsp;</p>
<p>Adding back the various boosts and coord yields Lucene's full scoring formula.</p>
<img src ="http://www.blogjava.net/ashutc/aggbug/327432.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ashutc/" target="_blank">西瓜</a> 2010-07-29 15:15 <a href="http://www.blogjava.net/ashutc/archive/2010/07/29/327432.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Lucene的检索优化（转）</title><link>http://www.blogjava.net/ashutc/archive/2010/07/19/326501.html</link><dc:creator>西瓜</dc:creator><author>西瓜</author><pubDate>Mon, 19 Jul 2010 03:46:00 GMT</pubDate><guid>http://www.blogjava.net/ashutc/archive/2010/07/19/326501.html</guid><wfw:comment>http://www.blogjava.net/ashutc/comments/326501.html</wfw:comment><comments>http://www.blogjava.net/ashutc/archive/2010/07/19/326501.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ashutc/comments/commentRss/326501.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ashutc/services/trackbacks/326501.html</trackback:ping><description><![CDATA[<p>It is also necessary to minimize the creation of IndexSearcher instances and to cache search results at the front end wherever possible.</p>
<p>Lucene's full-text-search optimization is that after the initial search it does not read out the full content of every matching record (Document); instead it caches and returns only the IDs of the 100 best-matching results (TopDocs). Compare this with a database: for a 10,000-row result set, the database must fetch the content of every record before it begins returning the result set to the application. So even when the total number of matches is large, a Lucene result set does not occupy much memory. Typical fuzzy-search applications never need that many results; the first 100 satisfy over 90% of search needs.</p>
<p>When the first batch of cached results is used up and later results are needed, the Searcher searches again and builds a cache twice the size of the previous one, then fetches forward again. So if you construct one Searcher to read results 1-120, it actually performs two search passes: after the first 100 are consumed the cache is exhausted, so the Searcher re-searches and builds a 200-entry result cache, and so on to 400 and 800 entries. Since these caches become unreachable once the Searcher object is gone, if you want to cache result records yourself, keep the count at 100 or below to make full use of the initial cache and save Lucene from searching repeatedly; you can also cache results in tiers.</p>
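The doubling behaviour described above can be sketched as a small simulation (plain Java; this models the growth pattern only, not Hits' actual code):

```java
public class HitsCacheSim {
    // how many search passes are needed before result index `wanted` is available,
    // starting from an initial cache of 100 and doubling each time it runs out
    static int searchesNeeded(int wanted) {
        int cached = 100;   // the first search caches 100 results
        int searches = 1;
        while (wanted > cached) {
            cached *= 2;    // each refill doubles the cache and re-runs the search
            searches++;
        }
        return searches;
    }

    public static void main(String[] args) {
        System.out.println(searchesNeeded(100)); // 1
        System.out.println(searchesNeeded(120)); // 2 - reading results 1-120 searches twice
        System.out.println(searchesNeeded(500)); // 4 - caches of 100, 200, 400, 800
    }
}
```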
<p>Another characteristic of Lucene is that it automatically filters out low-scoring results while collecting them, again unlike database applications, which must return all matching results.</p>
<p>When I first started learning Lucene, I read Lucene in Action. Following it along, the natural way to access search results is through Hits. In practice the search itself is fast, but with many results (say 10,000), reading them all through Hits is very slow: simply reading one Field from each result took nearly 2 minutes on my machine. My index covers only the two text fields of my data; my plan was to find the IDs of matching records through Lucene and then consult the other database fields to decide the final results. If just fetching the IDs takes 2 minutes, my application cannot bear it.</p>
<p>The first idea was to index all of my data fields in Lucene and search entirely through Lucene. But many of my fields are numbers, and converting them all into strings Lucene can accept did not promise good performance. Moreover, if I want statistics over the search results, a pass over all of them is unavoidable, and if 10,000 results take 2 minutes, that is intolerable even without processing the other fields.</p>
<p>The advantage of open source is that you can read the code. By reading the code of Hits, I finally found the solution.</p>
<p>Lucene's code does not look particularly professional. Take the two Hits constructors below. First, the names q, s, and f are uncomfortable to read (other code there uses i and j as loop variables). Second, the two functions differ only in the assignment to o, so one should obviously just call the other. Finally, the code uses the constant 50 directly (here and in other functions), a classic programming sin.</p>
<p>Hits(Searcher s, Query q, Filter f) throws IOException {<br />
&nbsp;&nbsp;&nbsp; weight = q.weight(s);<br />
&nbsp;&nbsp;&nbsp; searcher = s;<br />
&nbsp;&nbsp;&nbsp; filter = f;<br />
&nbsp;&nbsp;&nbsp; nDeletions = countDeletions(s);<br />
&nbsp;&nbsp;&nbsp; getMoreDocs(50); // retrieve 100 initially<br />
&nbsp;&nbsp;&nbsp; lengthAtStart = length;<br />
&nbsp; }</p>
<p>&nbsp; Hits(Searcher s, Query q, Filter f, Sort o) throws IOException {<br />
&nbsp;&nbsp;&nbsp; weight = q.weight(s);<br />
&nbsp;&nbsp;&nbsp; searcher = s;<br />
&nbsp;&nbsp;&nbsp; filter = f;<br />
&nbsp;&nbsp;&nbsp; sort = o;<br />
&nbsp;&nbsp;&nbsp; nDeletions = countDeletions(s);<br />
&nbsp;&nbsp;&nbsp; getMoreDocs(50); // retrieve 100 initially<br />
&nbsp;&nbsp;&nbsp; lengthAtStart = length;<br />
&nbsp; }<br />
These two constructors show that Hits loads only the first 100 documents when initialized.</p>
<p>Normally we access results through Document doc(int n). That function first checks how much data has already been loaded; if the requested entry is not present, it calls getMoreDocs, which fetches twice the needed number of documents.</p>
<p>The code of getMoreDocs is puzzling, though. It contains this:<br />
&nbsp;&nbsp;&nbsp; int n = min * 2;&nbsp;&nbsp;&nbsp; // double # retrieved<br />
&nbsp;&nbsp;&nbsp; TopDocs topDocs = (sort == null) ? searcher.search(weight, filter, n) : searcher.search(weight, filter, n, sort);<br />
Does that not mean every doubling calls search all over again? Unless search caches internally, performance must degrade exponentially!<br />
In the end Hits, too, produces its output from TopDocs and a Searcher, so we might as well use those lower-level objects directly. My original code was:</p>
<p>Hits hits = searcher.search(query);<br />
for (int i = 0; i &lt; hits.length(); i++) {<br />
&nbsp;&nbsp;&nbsp; Document doc = hits.doc(i);<br />
&nbsp;&nbsp;&nbsp; szTest.add(doc);<br />
}<br />
Which I changed to:<br />
TopDocs topDoc = searcher.search(query.weight(searcher), null, 100000); // note the last argument: the number of results search returns; make it larger than the maximum you could possibly get back, otherwise ScoreDoc holds only the number you set.</p>
<p>ScoreDoc[] scoreDocs = topDoc.scoreDocs;<br />
for (int i = 0; i &lt; scoreDocs.length; i++) {<br />
&nbsp;&nbsp;&nbsp; Document doc = searcher.doc(scoreDocs[i].doc);<br />
&nbsp;&nbsp;&nbsp; szTest.add(doc);<br />
}<br />
Adding the 12,000 IDs to an ArrayList now took 0.4 seconds, hundreds of times faster.</p>
<p>But we are not done yet.<br />
I only need the ID field, yet the whole Doc is returned, including the two other text Fields. Because Lucene stores information in an inverted index, each text Field has to be reassembled into its original string, which also costs time. searcher has a doc variant that can restrict which fields are loaded:</p>
<p>Document doc(int n, FieldSelector fieldSelector)</p>
<p>Below I define a FieldSelector that loads only a Field with a given name:<br />
class SpecialFieldSelector implements FieldSelector {<br />
&nbsp;&nbsp;&nbsp; protected String m_szFieldName;<br />
&nbsp;&nbsp;&nbsp; public SpecialFieldSelector(String szFieldName) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; m_szFieldName = szFieldName;<br />
&nbsp;&nbsp;&nbsp; }<br />
<br />
&nbsp;&nbsp;&nbsp; public FieldSelectorResult accept(String fieldName) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if (fieldName.equalsIgnoreCase(m_szFieldName)) {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return FieldSelectorResult.LOAD;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; } else {<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return FieldSelectorResult.NO_LOAD;<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; }<br />
&nbsp;&nbsp;&nbsp; }<br />
}<br />
Then I revised my code again:<br />
ScoreDoc[] scoreDocs = topDoc.scoreDocs;<br />
ArrayList&lt;Document&gt; szTest = new ArrayList&lt;Document&gt;();<br />
FieldSelector fieldSelector = new SpecialFieldSelector(FIELD_ID);<br />
for (int i = 0; i &lt; scoreDocs.length; i++) {<br />
&nbsp;&nbsp;&nbsp; Document doc = searcher.doc(scoreDocs[i].doc, fieldSelector);<br />
&nbsp;&nbsp;&nbsp; szTest.add(doc);<br />
}<br />
Returning the 12,000 IDs now takes 0.25 seconds. That is only about 150 milliseconds less than before, but it is nearly a 40% improvement, which matters under heavy load.</p>
<br />
<br />
Note:<br />
&nbsp;&nbsp; Some of this is worth borrowing.<br />
<br />
<br />
<br />
<br />
<br />
<img src ="http://www.blogjava.net/ashutc/aggbug/326501.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ashutc/" target="_blank">西瓜</a> 2010-07-19 11:46 <a href="http://www.blogjava.net/ashutc/archive/2010/07/19/326501.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Lucene笔记</title><link>http://www.blogjava.net/ashutc/archive/2010/07/16/326274.html</link><dc:creator>西瓜</dc:creator><author>西瓜</author><pubDate>Fri, 16 Jul 2010 03:04:00 GMT</pubDate><guid>http://www.blogjava.net/ashutc/archive/2010/07/16/326274.html</guid><wfw:comment>http://www.blogjava.net/ashutc/comments/326274.html</wfw:comment><comments>http://www.blogjava.net/ashutc/archive/2010/07/16/326274.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ashutc/comments/commentRss/326274.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ashutc/services/trackbacks/326274.html</trackback:ping><description><![CDATA[It is advisable to tokenize the keywords first<br />
<br />
<div style="background-color: #eeeeee; font-size: 13px; border: 1px solid #cccccc; padding: 4px 5px 4px 4px; width: 98%;"><!--<br />
<br />
Code highlighting produced by Actipro CodeHighlighter (freeware)<br />
http://www.CodeHighlighter.com/<br />
<br />
--><span style="color: #000000;">&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #008000;">//</span><span style="color: #008000;">&nbsp;tokenize with a TokenStream</span><span style="color: #008000;"><br />
</span><span style="color: #000000;">&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #0000ff;">public</span><span style="color: #000000;">&nbsp;</span><span style="color: #0000ff;">static</span><span style="color: #000000;">&nbsp;String&nbsp;analyze(Analyzer&nbsp;analyzer,&nbsp;String&nbsp;keyword)&nbsp;</span><span style="color: #0000ff;">throws</span><span style="color: #000000;">&nbsp;IOException&nbsp;{<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;StringBuffer&nbsp;sb&nbsp;</span><span style="color: #000000;">=</span><span style="color: #000000;">&nbsp;</span><span style="color: #0000ff;">new</span><span style="color: #000000;">&nbsp;StringBuffer();<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;TokenStream&nbsp;tokenStream&nbsp;</span><span style="color: #000000;">=</span><span style="color: #000000;">&nbsp;analyzer.tokenStream(</span><span style="color: #000000;">"</span><span style="color: #000000;">content</span><span style="color: #000000;">"</span><span style="color: #000000;">,&nbsp;</span><span style="color: #0000ff;">new</span><span style="color: #000000;">&nbsp;StringReader(keyword));<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #0000ff;">for</span><span style="color: #000000;">&nbsp;(Token&nbsp;token&nbsp;</span><span style="color: #000000;">=</span><span style="color: #000000;">&nbsp;</span><span style="color: #0000ff;">new</span><span style="color: #000000;">&nbsp;Token();&nbsp;(token&nbsp;</span><span style="color: #000000;">=</span><span style="color: #000000;">&nbsp;tokenStream.next(token))&nbsp;</span><span style="color: #000000;">!=</span><span style="color: #000000;">&nbsp;</span><span style="color: #0000ff;">null</span><span style="color: #000000;">;)&nbsp;{<br />
<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;sb.append(token.term()&nbsp;</span><span style="color: #000000;">+</span><span style="color: #000000;">&nbsp;</span><span style="color: #000000;">"</span><span style="color: #000000;">&nbsp;</span><span style="color: #000000;">"</span><span style="color: #000000;">);<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;}<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #0000ff;">return</span><span style="color: #000000;">&nbsp;sb.toString();<br />
<br />
&nbsp;&nbsp;&nbsp;&nbsp;}</span></div>
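Since a WhitespaceAnalyzer simply splits on whitespace, the net effect of the analyze() helper above can be approximated without any Lucene dependency. This is only a sketch of the behavior (the class name WhitespaceTokenizeSketch is made up for illustration); real analyzers return a TokenStream and may do more than split:

```java
// Pure-JDK approximation of analyze() when the analyzer is WhitespaceAnalyzer:
// split the keyword on runs of whitespace, then append each token plus a
// trailing space, exactly as the original loop does with sb.append(term + " ").
public class WhitespaceTokenizeSketch {
    public static String analyze(String keyword) {
        StringBuilder sb = new StringBuilder();
        for (String token : keyword.trim().split("\\s+")) {
            sb.append(token).append(" ");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Collapses the double space, keeps one trailing space.
        System.out.println("[" + analyze("hello  lucene world") + "]");
    }
}
```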
<br />
Setting the AND/OR semantics of spaces between keywords<br />
<br />
<div style="background-color: #eeeeee; font-size: 13px; border: 1px solid #cccccc; padding: 4px 5px 4px 4px; width: 98%;"><pre style="margin: 0;">
queryParser_and.setDefaultOperator(QueryParser.AND_OPERATOR);
queryParser_or.setDefaultOperator(QueryParser.OR_OPERATOR);
</pre></div>
<br />
<br />
Escaping special characters<br />
<br />
<div style="background-color: #eeeeee; font-size: 13px; border: 1px solid #cccccc; padding: 4px 5px 4px 4px; width: 98%;"><pre style="margin: 0;">
Query query_and = queryParser_and.parse(QueryParser.escape(keyword));
</pre></div>
<br />
<br />
Handling input that contains more than one space<br />
<br />
<div style="background-color: #eeeeee; font-size: 13px; border: 1px solid #cccccc; padding: 4px 5px 4px 4px; width: 98%;"><pre style="margin: 0;">
public static String nvl(String value) {
    return value == null ? "" : value;
}

/**
 * Converts everything after the first space into an OR expression.
 *
 * @param wd
 * @return e.g. "ibm t60 mp3 液晶" becomes "ibm t60 OR mp3 OR 液晶"
 */
public static String parseWd(String wd) {
    String retwd = nvl(wd).replaceAll("　", " ").replaceAll("  ", " ");
    String[] arr = nvl(retwd).split(" ");
    if (arr != null &amp;&amp; arr.length &gt; 2) {
        retwd = (arr[0].trim().equals("OR") ? "" : arr[0] + " ") + (arr[1].trim().equals("OR") ? "" : arr[1]);
        for (int i = 2; i &lt; arr.length; i++) {
            if (!arr[i].trim().equals("OR")) {
                retwd += " OR " + arr[i];
            }
        }
    }
    return retwd;
}
</pre></div>
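To see what parseWd actually produces, here is the same string logic as a standalone class, runnable without Lucene (the class name ParseWdSketch is mine, for illustration only):

```java
// Standalone copy of the nvl/parseWd logic: the first two tokens are kept
// as-is, and every later token is joined in with " OR ".
public class ParseWdSketch {
    public static String nvl(String value) {
        return value == null ? "" : value;
    }

    public static String parseWd(String wd) {
        // Normalize full-width spaces and double spaces, then split on single spaces.
        String retwd = nvl(wd).replaceAll("　", " ").replaceAll("  ", " ");
        String[] arr = retwd.split(" ");
        if (arr.length > 2) {
            retwd = (arr[0].trim().equals("OR") ? "" : arr[0] + " ")
                  + (arr[1].trim().equals("OR") ? "" : arr[1]);
            for (int i = 2; i < arr.length; i++) {
                if (!arr[i].trim().equals("OR")) {
                    retwd += " OR " + arr[i];
                }
            }
        }
        return retwd;
    }

    public static void main(String[] args) {
        System.out.println(parseWd("ibm t60 mp3 player")); // ibm t60 OR mp3 OR player
        System.out.println(parseWd("ibm t60"));            // ibm t60 (two tokens pass through)
    }
}
```

Note that inputs with two or fewer tokens pass through unchanged, so a plain two-word query is left for the default operator to handle.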
<br />
<br />
<img src ="http://www.blogjava.net/ashutc/aggbug/326274.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ashutc/" target="_blank">西瓜</a> 2010-07-16 11:04 <a href="http://www.blogjava.net/ashutc/archive/2010/07/16/326274.html#Feedback" target="_blank" style="text-decoration:none;">Post a comment</a></div>]]></description></item><item><title>Using Lucene 2.9.0</title><link>http://www.blogjava.net/ashutc/archive/2010/07/12/325844.html</link><dc:creator>西瓜</dc:creator><author>西瓜</author><pubDate>Mon, 12 Jul 2010 03:49:00 GMT</pubDate><guid>http://www.blogjava.net/ashutc/archive/2010/07/12/325844.html</guid><wfw:comment>http://www.blogjava.net/ashutc/comments/325844.html</wfw:comment><comments>http://www.blogjava.net/ashutc/archive/2010/07/12/325844.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.blogjava.net/ashutc/comments/commentRss/325844.html</wfw:commentRss><trackback:ping>http://www.blogjava.net/ashutc/services/trackbacks/325844.html</trackback:ping><description><![CDATA[<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;"><font style="word-wrap: break-word;" face="宋体">Creating an IndexWriter in the latest 2.9 style:</font></p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;"><font style="word-wrap: break-word;" face="宋体">Directory directory = new SimpleFSDirectory(new File(path), new SimpleFSLockFactory()); // create the Directory first<br style="word-wrap: break-word;" />
IndexWriter writer = new IndexWriter(directory, new WhitespaceAnalyzer(), cover, IndexWriter.MaxFieldLength.UNLIMITED); // no limit on field length (the large field here is content); cover=true overwrites the index for initialization, false keeps it for updates; a WhitespaceAnalyzer is used here</font></p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;"><font style="word-wrap: break-word;" face="宋体">Tuning IndexWriter parameters</font></p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;"><font style="word-wrap: break-word;" face="宋体">writer.setMergeFactor(50); // how many segments accumulate before a merge<br style="word-wrap: break-word;" />
writer.setMaxMergeDocs(5000); // maximum number of documents in one segment</font></p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;"><font style="word-wrap: break-word;" face="宋体">&nbsp;</font></p>
<font style="word-wrap: break-word;" face="宋体"><font style="word-wrap: break-word;" face="宋体">
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">Converting other formats into the Document form that Lucene needs</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">Document doc = new Document(); // each doc is like one row in a database table<br style="word-wrap: break-word;" />
doc.add(new Field("uid", line.getUid().toString(), Store.YES, Index.NO)); // each Field is like a database column<br style="word-wrap: break-word;" />
doc.add(new Field("title", line.getTitle(), Store.NO, Index.ANALYZED));<br style="word-wrap: break-word;" />
doc.add(new Field("content", line.getContent(), Store.NO, Index.ANALYZED));</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">Adding docs to the IndexWriter; any number of docs can be added</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">writer.addDocument(doc);<br style="word-wrap: break-word;" />
writer.addDocument(doc2);<br style="word-wrap: break-word;" />
writer.addDocument(doc3);</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">Committing the writes (the actual write happens on close)</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">writer.close();<br style="word-wrap: break-word;" />
writer = null;</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">Reading how many documents were indexed</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">writer.numDocs()<br style="word-wrap: break-word;" />
writer.maxDoc()</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">Before close you may optimize (not recommended while the index is still being built)</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">writer.optimize()</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">2. Clearing the index<br style="word-wrap: break-word;" />
Directory directory = new SimpleFSDirectory(new File(path), new SimpleFSLockFactory());<br style="word-wrap: break-word;" />
IndexWriter.unlock(directory); // the key step: unlock the directory (this releases the write.lock)<br style="word-wrap: break-word;" />
IndexWriter writer = new IndexWriter(directory, new WhitespaceAnalyzer(), false, IndexWriter.MaxFieldLength.LIMITED);<br style="word-wrap: break-word;" />
writer.deleteAll(); // marks everything as deleted<br style="word-wrap: break-word;" />
writer.optimize(); // this step actually performs the deletion<br style="word-wrap: break-word;" />
writer.close();</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">3. Deleting specific index entries (much like clearing)<br style="word-wrap: break-word;" />
writer.deleteDocuments(new Term("uri", uri)); // deletes the one or more docs matching this term<br style="word-wrap: break-word;" />
writer.deleteDocuments(query); // deletes everything matched by a query</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">4. Updating the index<br style="word-wrap: break-word;" />
An update is a delete followed by an add; there is no in-place update.</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">5. Reading the terms of a built index<br style="word-wrap: break-word;" />
TermEnum terms = indexReader.terms(new Term(index, ""));<br style="word-wrap: break-word;" />
Term term = terms.term(); // get one term<br style="word-wrap: break-word;" />
term.field(); // the term's field name<br style="word-wrap: break-word;" />
term.text(); // the term's text value</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">6. Searching<br style="word-wrap: break-word;" />
Creating an IndexSearcher in the latest 2.9 style:</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">Directory directory = new SimpleFSDirectory(new File(path), new SimpleFSLockFactory());<br style="word-wrap: break-word;" />
IndexSearcher indexSearcher = new IndexSearcher(directory, true);</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">Building the query conditions (here the most complex case: several constraints combined, some of them matched across multiple fields, with both exact and range restrictions)</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">BooleanQuery bQuery = new BooleanQuery();<br style="word-wrap: break-word;" />
Query query1 = null, query2 = null, query3 = null;<br style="word-wrap: break-word;" />
BooleanClause.Occur[] flags = new BooleanClause.Occur[] { BooleanClause.Occur.SHOULD, BooleanClause.Occur.SHOULD };<br style="word-wrap: break-word;" />
query1 = MultiFieldQueryParser.parse(params.get("keywords"), new String[] { "title", "content" }, flags, new WhitespaceAnalyzer());<br style="word-wrap: break-word;" />
bQuery.add(query1, Occur.MUST); // query1 matches the keywords against both title and content<br style="word-wrap: break-word;" />
query2 = new TermQuery(new Term("startgui", params.get("startgui")));<br style="word-wrap: break-word;" />
bQuery.add(query2, Occur.MUST); // query2 is an exact match<br style="word-wrap: break-word;" />
Long minPriceLong = Long.parseLong(params.get("minPrice"));<br style="word-wrap: break-word;" />
Long maxPriceLong = Long.parseLong(params.get("maxPrice"));<br style="word-wrap: break-word;" />
query3 = NumericRangeQuery.newLongRange("price", minPriceLong, maxPriceLong, true, true);<br style="word-wrap: break-word;" />
bQuery.add(query3, Occur.MUST); // query3 is a range match</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">Sorting</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">SortField[] sortField = new SortField[] { SortField.FIELD_SCORE, new SortField(null, SortField.DOC, true) }; // default sort<br style="word-wrap: break-word;" />
SortField sortPriceField = new SortField("sortPrice", SortField.LONG, sortPrice);<br style="word-wrap: break-word;" />
sortField = new SortField[] { sortPriceField, SortField.FIELD_SCORE, new SortField(null, SortField.DOC, true) }; // custom sort by price</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">The latest 2.9 search call, which returns only doc ids</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">TopFieldDocs docs = indexSearcher.search(query, null, indexSearcher.maxDoc(), new Sort(sortField));<br style="word-wrap: break-word;" />
ScoreDoc[] scoreDocs = docs.scoreDocs;<br style="word-wrap: break-word;" />
docCount = scoreDocs.length;</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">Adding pagination</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">List&lt;Document&gt; docList = new ArrayList&lt;Document&gt;();<br style="word-wrap: break-word;" />
int max = ((startIndex + pageSize) &gt;= docCount) ? docCount : (startIndex + pageSize); // max guards against ArrayIndexOutOfBoundsException<br style="word-wrap: break-word;" />
for (int i = startIndex; i &lt; max; i++) {<br style="word-wrap: break-word;" />
&nbsp;&nbsp;&nbsp;&nbsp;ScoreDoc scoredoc = scoreDocs[i];<br style="word-wrap: break-word;" />
&nbsp;&nbsp;&nbsp;&nbsp;Document doc = indexSearcher.doc(scoredoc.doc); // the new accessor<br style="word-wrap: break-word;" />
&nbsp;&nbsp;&nbsp;&nbsp;docList.add(doc);<br style="word-wrap: break-word;" />
}</p>
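The bound computation in the paging loop above is just a clamped upper index; it can be checked in isolation with plain JDK code (the class and method names here are mine, for illustration):

```java
// The pagination guard above, isolated: clamp startIndex + pageSize to docCount
// so the loop never reads past the end of scoreDocs.
public class PageBoundsSketch {
    public static int upperBound(int startIndex, int pageSize, int docCount) {
        return Math.min(startIndex + pageSize, docCount);
    }

    public static void main(String[] args) {
        System.out.println(upperBound(0, 10, 37));  // 10 : a full first page
        System.out.println(upperBound(30, 10, 37)); // 37 : the last page is clamped
    }
}
```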
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">Loop over the Documents in docList to read the values you need</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">doc.get("title");</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">...</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">7. About analyzers<br style="word-wrap: break-word;" />
The analyzer used at index time must match the one used at search time, and the index directory must also be the same.</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">Some analyzers that ship with Lucene</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">StandardAnalyzer() splits on whitespace and punctuation</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">WhitespaceAnalyzer() splits on whitespace only</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">For Chinese segmentation, the paoding analyzer is used here</p>
<p style="word-wrap: break-word; margin: 0px 0px 1em; padding: 0px; border-width: 0px; list-style-type: none;">It segments against a dictionary first, and falls back to bigram splitting for words not found in the dictionary</p>
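The difference between the two built-in analyzers can be approximated with plain string splitting. This is a rough, Lucene-free sketch of the splitting rules only (real analyzers also handle stop words, numbers like "2.9" kept whole by StandardAnalyzer, and more); the class name is made up:

```java
import java.util.Arrays;
import java.util.List;

// Rough approximation: WhitespaceAnalyzer splits only on whitespace,
// while StandardAnalyzer additionally breaks on punctuation and lowercases.
public class AnalyzerSplitSketch {
    public static List<String> whitespaceStyle(String text) {
        return Arrays.asList(text.trim().split("\\s+"));
    }

    public static List<String> standardStyle(String text) {
        return Arrays.asList(text.toLowerCase().trim().split("[\\s\\p{Punct}]+"));
    }

    public static void main(String[] args) {
        // Whitespace splitting keeps punctuation attached to tokens.
        System.out.println(whitespaceStyle("Hello, Lucene-2.9")); // [Hello,, Lucene-2.9]
        // Punctuation-aware splitting breaks the same input further.
        System.out.println(standardStyle("Hello, Lucene-2.9"));   // [hello, lucene, 2, 9]
    }
}
```

This illustrates why the same analyzer must be used at index and search time: the two schemes produce incompatible token sets for the same text.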
</font></font><br />
<img src ="http://www.blogjava.net/ashutc/aggbug/325844.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.blogjava.net/ashutc/" target="_blank">西瓜</a> 2010-07-12 11:49 <a href="http://www.blogjava.net/ashutc/archive/2010/07/12/325844.html#Feedback" target="_blank" style="text-decoration:none;">Post a comment</a></div>]]></description></item></channel></rss>