How to create an IndexWriter in the latest 2.9:

Directory directory = new SimpleFSDirectory(new File(path), new SimpleFSLockFactory());  // first create the Directory
IndexWriter writer = new IndexWriter(directory, new WhitespaceAnalyzer(), cover, IndexWriter.MaxFieldLength.UNLIMITED); // unlimited max field length (the big field here is content); cover=true overwrites the index (initial build), false appends (updates); a WhitespaceAnalyzer is used here

Tuning IndexWriter parameters

writer.setMergeFactor(50); // how many segments to merge at a time
writer.setMaxMergeDocs(5000); // maximum number of documents per segment

 


Converting other formats into the Document form Lucene needs

Document doc = new Document();  // each doc is like a row in a database table
doc.add(new Field("uid", line.getUid().toString(), Store.YES, Index.NO));  // each Field is like a database column

doc.add(new Field("title", line.getTitle(), Store.NO, Index.ANALYZED));
doc.add(new Field("content", line.getContent(), Store.NO, Index.ANALYZED));

Adding docs to the IndexWriter (multiple docs can be added)

writer.addDocument(doc);
writer.addDocument(doc2);
writer.addDocument(doc3);

Committing the writes (the actual write happens on close)

writer.close();
writer = null;

Reading the number of indexed documents

writer.numDocs()
writer.maxDoc()

Optimization can be done before close (not recommended while still building the index)

writer.optimize()

2. Clearing the index
Directory directory = new SimpleFSDirectory(new File(path), new SimpleFSLockFactory());
IndexWriter.unlock(directory);  // the key step: unlock the directory (this releases the write.lock)
IndexWriter writer = new IndexWriter(directory, new WhitespaceAnalyzer(), false, IndexWriter.MaxFieldLength.LIMITED);
writer.deleteAll();  // marks all documents as deleted
writer.optimize();  // this step performs the actual deletion
writer.close();

3. Deleting specific index entries (much like clearing)
writer.deleteDocuments(new Term("uri", uri));  // deletes the one or more docs matching the term
writer.deleteDocuments(query); // deletes everything matched by a query

4. Updating the index
This is just a delete-then-add process; there is no in-place update.
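In 2.9 that delete-then-add pair is also wrapped in a single call, IndexWriter.updateDocument. A minimal sketch (field names follow the earlier examples; note the delete term can only match if the key field is indexed, e.g. Index.NOT_ANALYZED rather than Index.NO, and a Lucene 2.9 jar on the classpath is assumed):

```java
// updateDocument deletes every document matching the term, then adds the
// new document -- still delete-then-add under the hood, but in one call.
Document doc = new Document();
doc.add(new Field("uri", uri, Field.Store.YES, Field.Index.NOT_ANALYZED));
doc.add(new Field("title", newTitle, Field.Store.NO, Field.Index.ANALYZED));
writer.updateDocument(new Term("uri", uri), doc);
```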

5. Reading the terms stored in the index
TermEnum terms = indexReader.terms(new Term(index, ""));
Term term = terms.term();  // get one term
term.field(); // the term's field name
term.text(); // the term's text value

6. Searching
How to create an IndexSearcher in the latest 2.9:

Directory directory = new SimpleFSDirectory(new File(path), new SimpleFSLockFactory());
IndexSearcher indexSearcher = new IndexSearcher(directory, true);

Building the query (here the most complex case: multiple constraints, some of them matched across several fields, with both exact and range conditions)

 


BooleanQuery bQuery = new BooleanQuery();
Query query1 = null, query2 = null, query3 = null;
BooleanClause.Occur[] flags = new BooleanClause.Occur[] { BooleanClause.Occur.SHOULD, BooleanClause.Occur.SHOULD };
query1 = MultiFieldQueryParser.parse(params.get("keywords"), new String[] { "title", "content" }, flags, new WhitespaceAnalyzer());
bQuery.add(query1, Occur.MUST); // query1 matches the keywords against both title and content
query2 = new TermQuery(new Term("startgui", params.get("startgui")));
bQuery.add(query2, Occur.MUST); // query2 is an exact match
Long minPriceLong = Long.parseLong(params.get("minPrice"));
Long maxPriceLong = Long.parseLong(params.get("maxPrice"));
query3 = NumericRangeQuery.newLongRange("price", minPriceLong, maxPriceLong, true, true);
bQuery.add(query3, Occur.MUST);  // query3 is a range match
 

Sorting

SortField[] sortField = new SortField[] { SortField.FIELD_SCORE, new SortField(null, SortField.DOC, true) }; // default sort order
SortField sortPriceField = new SortField("sortPrice", SortField.LONG, sortPrice);
sortField = new SortField[] { sortPriceField, SortField.FIELD_SCORE, new SortField(null, SortField.DOC, true) };  // custom sort by price
 

The latest 2.9 search call, which returns only doc ids

TopFieldDocs docs = indexSearcher.search(query, null, indexSearcher.maxDoc(), new Sort(sortField));
ScoreDoc[] scoreDocs = docs.scoreDocs;
docCount = scoreDocs.length;

Adding pagination

List<Document> docList = new ArrayList<Document>();
int max = ((startIndex + pageSize) >= docCount) ? docCount : (startIndex + pageSize); // max guards against ArrayIndexOutOfBoundsException
for (int i = startIndex; i < max; i++) {
    ScoreDoc scoredoc = scoreDocs[i];
    Document doc = indexSearcher.doc(scoredoc.doc); // the new way to fetch a Document
    docList.add(doc);
}
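The max clamp in the pagination snippet is just Math.min; a tiny self-contained check of that bound logic (the class and method names here are made up for illustration):

```java
public class PageBounds {
    // Exclusive upper bound of a page: never past docCount, so
    // scoreDocs[i] inside the paging loop cannot go out of bounds.
    static int upperBound(int startIndex, int pageSize, int docCount) {
        return Math.min(startIndex + pageSize, docCount);
    }

    public static void main(String[] args) {
        System.out.println(upperBound(0, 10, 25));  // full first page -> 10
        System.out.println(upperBound(20, 10, 25)); // partial last page -> 25
    }
}
```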
 

Loop over the Documents in docList to extract the values you need

doc.get("title");

...

7. About analyzers
Note that the analyzer used when building the index and when searching must be the same, and the index directory must also be the same in both places.

Some analyzers that ship with Lucene

StandardAnalyzer() splits on whitespace and punctuation

WhitespaceAnalyzer() splits on whitespace only

For Chinese, the paoding analyzer is used here

It segments against its dictionary first; when a word is not in the dictionary it falls back to two-character (bigram) segmentation
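Wiring paoding in works like any other Analyzer; a sketch, assuming the paoding-analysis jar and its dictionaries are installed and configured (PaodingAnalyzer comes from paoding, not from this article):

```java
Analyzer analyzer = new PaodingAnalyzer();  // dictionary-first Chinese analyzer
// the same analyzer must be used on both the indexing and the searching side:
IndexWriter writer = new IndexWriter(directory, analyzer, true, IndexWriter.MaxFieldLength.UNLIMITED);
QueryParser parser = new QueryParser(Version.LUCENE_29, "content", analyzer);
```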