Yes, there is a way to do this, using multi_fields.
In Elasticsearch 5.0 onwards, string field types were split into two separate types: text field types, which are analyzed and can be used for search, and keyword field types, which are not analyzed and are suited to sorting, aggregations and exact value matches.
With dynamic mapping in Elasticsearch 5.0 (i.e. letting Elasticsearch infer the type to which a document property should be mapped), a JSON string property is mapped to a text field type, with a "keyword" sub-field mapped as a keyword field type with the setting ignore_above: 256.
With NEST 5.x automapping, a string property on your POCO will be automapped in the same way that dynamic mapping in Elasticsearch maps it, as above. For example, given the following document
public class Document
{
    public string Property { get; set; }
}
automapping it
var pool = new SingleNodeConnectionPool(new Uri("http://localhost:9200"));
var defaultIndex = "default-index";
var connectionSettings = new ConnectionSettings(pool)
    .DefaultIndex(defaultIndex);

var client = new ElasticClient(connectionSettings);

client.CreateIndex(defaultIndex, c => c
    .Mappings(m => m
        .Map<Document>(mm => mm
            .AutoMap()
        )
    )
);
produces
{
  "mappings": {
    "document": {
      "properties": {
        "property": {
          "fields": {
            "keyword": {
              "ignore_above": 256,
              "type": "keyword"
            }
          },
          "type": "text"
        }
      }
    }
  }
}
You can now use the keyword sub-field for sorting with Field(f => f.Property.Suffix("keyword")). Take a look at Field Inference for more examples.
keyword field types have doc_values enabled by default, which means that a columnar data structure is built at index time; this is what provides efficient sorting and aggregations.
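As a minimal sketch of how this fits together (the match-all query is purely illustrative, and running it assumes the client and Document POCO from above plus a reachable cluster):

```csharp
// Sort on the non-analyzed "keyword" sub-field; sorting on the analyzed
// text field itself would fail, since text fields do not have doc_values.
var searchResponse = client.Search<Document>(s => s
    .Query(q => q.MatchAll())
    .Sort(so => so
        .Ascending(f => f.Property.Suffix("keyword"))
    )
);
```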
To add a custom analyzer at index creation time, we can automap as before, but then provide overrides for fields whose mapping we want to control, using .Properties()
client.CreateIndex(defaultIndex, c => c
    .Settings(s => s
        .Analysis(a => a
            .Analyzers(aa => aa
                .Custom("lowercase_analyzer", ca => ca
                    .Tokenizer("keyword")
                    .Filters(
                        "standard",
                        "lowercase",
                        "trim"
                    )
                )
            )
        )
    )
    .Mappings(m => m
        .Map<Document>(mm => mm
            .AutoMap()
            .Properties(p => p
                .Text(t => t
                    .Name(n => n.Property)
                    .Analyzer("lowercase_analyzer")
                    .Fields(f => f
                        .Keyword(k => k
                            .Name("keyword")
                            .IgnoreAbove(256)
                        )
                    )
                )
            )
        )
    )
);
which produces
{
  "settings": {
    "analysis": {
      "analyzer": {
        "lowercase_analyzer": {
          "type": "custom",
          "filter": [
            "standard",
            "lowercase",
            "trim"
          ],
          "tokenizer": "keyword"
        }
      }
    }
  },
  "mappings": {
    "document": {
      "properties": {
        "property": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          },
          "analyzer": "lowercase_analyzer"
        }
      }
    }
  }
}
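To see the difference the two fields make at query time, a sketch (the query string "FOO Bar" is a hypothetical value; this assumes the client and mapping above):

```csharp
// Match query against the text field: the input is run through
// lowercase_analyzer, so the search is case-insensitive.
var analyzed = client.Search<Document>(s => s
    .Query(q => q
        .Match(m => m
            .Field(f => f.Property)
            .Query("FOO Bar")
        )
    )
);

// Term query against the keyword sub-field: no analysis is applied,
// so only documents with the exact value "FOO Bar" will match.
var exact = client.Search<Document>(s => s
    .Query(q => q
        .Term(t => t
            .Field(f => f.Property.Suffix("keyword"))
            .Value("FOO Bar")
        )
    )
);
```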