With Elasticsearch, I would like to specify a search analyzer that tokenizes only the first 4 characters and the last 4 characters of the input.
For example: supercalifragilisticexpialidocious => ["supe", "ious"]
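To pin down the behaviour I'm after, here is a throwaway Python sketch of the desired tokenization (not part of my setup; the function name is just illustrative):

def desired_tokens(text):
    # Emit only the first 4 and last 4 characters of the input.
    if len(text) <= 4:
        return [text]  # short inputs would presumably just yield themselves
    return [text[:4], text[-4:]]

print(desired_tokens("supercalifragilisticexpialidocious"))
# ['supe', 'ious']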
I have had a go with an ngram tokenizer as follows:
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "ngram",
          "min_gram": 4,
          "max_gram": 4
        }
      }
    }
  }
}
I am testing the analyzer as follows:
POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "supercalifragilisticexpialidocious."
}
And I get back `supe`, loads of tokens in between that I don't want, and finally `ous.`. The problem for me is: how can I take only the first and last tokens from the ngram tokenizer specified above?
{
  "tokens": [
    {
      "token": "supe",
      "start_offset": 0,
      "end_offset": 4,
      "type": "word",
      "position": 0
    },
    {
      "token": "uper",
      "start_offset": 1,
      "end_offset": 5,
      "type": "word",
      "position": 1
    },
    ...
    {
      "token": "ciou",
      "start_offset": 29,
      "end_offset": 33,
      "type": "word",
      "position": 29
    },
    {
      "token": "ious",
      "start_offset": 30,
      "end_offset": 34,
      "type": "word",
      "position": 30
    },
    {
      "token": "ous.",
      "start_offset": 31,
      "end_offset": 35,
      "type": "word",
      "position": 31
    }
  ]
}
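Obviously I could trim the _analyze response client-side, along the lines of this Python sketch (resp stands in for the parsed JSON above, abbreviated here), but I would much rather have the analyzer itself emit only those two tokens:

# Parsed _analyze response, abbreviated to a few of the tokens shown above.
resp = {"tokens": [{"token": "supe"}, {"token": "uper"}, {"token": "ous."}]}

tokens = [t["token"] for t in resp["tokens"]]
first_and_last = [tokens[0], tokens[-1]]
print(first_and_last)  # ['supe', 'ous.']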