If you are using the StandardAnalyzer, it will discard non-alphanumeric characters. Try indexing the same value with a WhitespaceAnalyzer and see if that preserves the characters you need. It might also keep stuff you don't want: that's when you might consider writing your own Analyzer, which basically means creating a TokenStream stack that does exactly the kind of processing you need.
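A quick way to see the difference is to run the same text through each Analyzer and print the tokens it emits. This is only a rough sketch against the older 2.9/3.0-style API used in the snippets below (the field name and sample text are made up):

import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;
import org.apache.lucene.util.Version;

public class DumpTokens {
    // Prints every token the given Analyzer produces for the text.
    static void dump(Analyzer analyzer, String text) throws Exception {
        TokenStream stream = analyzer.tokenStream("f", new StringReader(text));
        TermAttribute term = stream.addAttribute(TermAttribute.class);
        while (stream.incrementToken()) {
            System.out.println(term.term());
        }
        stream.close();
    }

    public static void main(String[] args) throws Exception {
        String text = "Foo-bar C++ 3.14!";
        dump(new StandardAnalyzer(Version.LUCENE_CURRENT), text); // strips most punctuation
        dump(new WhitespaceAnalyzer(), text);                     // splits on whitespace only
    }
}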
The SimpleAnalyzer, for example, implements the following pipeline:
@Override
public TokenStream tokenStream(String fieldName, Reader reader) {
    return new LowerCaseTokenizer(reader);
}
which just lower-cases the tokens.
The StandardAnalyzer does much more:
/** Constructs a {@link StandardTokenizer} filtered by a {@link StandardFilter},
    a {@link LowerCaseFilter} and a {@link StopFilter}. */
@Override
public TokenStream tokenStream(String fieldName, Reader reader) {
    StandardTokenizer tokenStream = new StandardTokenizer(matchVersion, reader);
    tokenStream.setMaxTokenLength(maxTokenLength);
    TokenStream result = new StandardFilter(tokenStream);
    result = new LowerCaseFilter(result);
    result = new StopFilter(enableStopPositionIncrements, result, stopSet);
    return result;
}
You can mix & match from these and other components in org.apache.lucene.analysis, or you can write your own specialized TokenStream instances that are wrapped into a processing pipeline by your custom Analyzer.
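For instance, a bare-bones custom Analyzer that keeps everything between whitespace but still lower-cases it could look roughly like this (the class name is made up, and it uses the same pre-3.1 tokenStream API as the snippets above):

import java.io.Reader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;

// Whitespace tokenization plus lower-casing: punctuation is preserved,
// no stop words are removed and no characters are stripped.
public class KeepPunctuationAnalyzer extends Analyzer {
    @Override
    public TokenStream tokenStream(String fieldName, Reader reader) {
        TokenStream result = new WhitespaceTokenizer(reader);
        result = new LowerCaseFilter(result);
        return result;
    }
}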
One other thing to look at is what sort of CharTokenizer you're using. CharTokenizer is an abstract class that specifies the machinery for tokenizing text strings. It's used by some simpler Analyzers (but not by the StandardAnalyzer). Lucene comes with two subclasses: a LetterTokenizer and a WhitespaceTokenizer. You can create your own that keeps the characters you need and breaks on those you don't by implementing the boolean isTokenChar(char c) method.
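For example (the class name and the extra characters it keeps are just an illustration, and this uses the pre-3.1 isTokenChar(char) signature mentioned above), a tokenizer that also treats '+' and '#' as token characters might look like this:

import java.io.Reader;
import org.apache.lucene.analysis.CharTokenizer;

// Keeps letters, digits, '+' and '#' so tokens like "C++" or "C#" survive;
// any other character ends the current token.
public class ProgrammingLanguageTokenizer extends CharTokenizer {
    public ProgrammingLanguageTokenizer(Reader input) {
        super(input);
    }

    @Override
    protected boolean isTokenChar(char c) {
        return Character.isLetterOrDigit(c) || c == '+' || c == '#';
    }
}

Wrap it in an Analyzer the same way as above (optionally followed by a LowerCaseFilter, since CharTokenizer itself doesn't lower-case) and those characters will survive both indexing and analysis of your queries.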