2 votes

I am using Elasticsearch as my search engine, and I am now trying to create a custom analyzer that simply lowercases the field value. The following is my code:

Create index and mapping

Create the index with a custom analyzer named test_lowercase:

curl -XPUT 'localhost:9200/test/' -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "test_lowercase": {
          "type": "pattern",
          "pattern": "^.*$"
        }
      }
    }
  }
}'

Create a mapping that uses the test_lowercase analyzer for the address field:

curl -XPUT 'localhost:9200/test/_mapping/Users' -d '{
  "Users": {
    "properties": {
      "name": {
        "type": "string"
      },
      "address": {
        "type": "string",
        "analyzer": "test_lowercase"
      }
    }
  }
}'

To verify that the test_lowercase analyzer works:

curl -XGET 'localhost:9200/test/_analyze?analyzer=test_lowercase&pretty' -d '
Beijing China
'
{
  "tokens" : [ {
    "token" : "\nbeijing china\n",
    "start_offset" : 0,
    "end_offset" : 15,
    "type" : "word",
    "position" : 0
  } ]
}

As we can see, the string 'Beijing China' is analyzed into a single lowercased term 'beijing china', so the test_lowercase analyzer works fine.

To verify that the 'address' field is using the test_lowercase analyzer:

curl -XGET 'http://localhost:9200/test/_analyze?field=address&pretty' -d '
Beijing China
'
{
  "tokens" : [ {
    "token" : "\nbeijing china\n",
    "start_offset" : 0,
    "end_offset" : 15,
    "type" : "word",
    "position" : 0
  } ]
}
curl -XGET 'http://localhost:9200/test/_analyze?field=name&pretty' -d '
Beijing China
'
{
  "tokens" : [ {
    "token" : "beijing",
    "start_offset" : 1,
    "end_offset" : 8,
    "type" : "<ALPHANUM>",
    "position" : 0
  }, {
    "token" : "china",
    "start_offset" : 9,
    "end_offset" : 14,
    "type" : "<ALPHANUM>",
    "position" : 1
  } ]
}

As we can see, for the same string 'Beijing China', analyzing with field=address produces a single token 'beijing china', while field=name produces two tokens, 'beijing' and 'china'. So the address field does seem to be using my custom analyzer 'test_lowercase'.

Insert a document into the test index to see whether the analyzer works for documents

curl -XPUT 'localhost:9200/test/Users/12345?pretty' -d '{"name": "Jinshui Tang",  "address": "Beijing China"}'

Unfortunately, although the document was inserted successfully, the address field was not analyzed as expected. I can't find the document with the following wildcard query:

curl -XGET 'http://localhost:9200/test/Users/_search?pretty' -d '
{
  "query": {
    "wildcard": {
      "address": "*beijing ch*"
    }
  }
}'
{
  "took" : 8,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : null,
    "hits" : [ ]
  }
}
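
For completeness, the mapping can also be fetched to confirm the analyzer really is attached to the address field (using the same mapping endpoint as above); the address property should show "analyzer": "test_lowercase":

curl -XGET 'localhost:9200/test/_mapping/Users?pretty'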

List all terms analyzed for the document:

So I ran the following command to see all the terms of the document, and I found that 'Beijing China' is not in the term vector at all.

curl -XGET 'http://localhost:9200/test/Users/12345/_termvector?fields=*&pretty'
{
  "_index" : "test",
  "_type" : "Users",
  "_id" : "12345",
  "_version" : 3,
  "found" : true,
  "took" : 2,
  "term_vectors" : {
    "name" : {
      "field_statistics" : {
        "sum_doc_freq" : 2,
        "doc_count" : 1,
        "sum_ttf" : 2
      },
      "terms" : {
        "jinshui" : {
          "term_freq" : 1,
          "tokens" : [ {
            "position" : 0,
            "start_offset" : 0,
            "end_offset" : 7
          } ]
        },
        "tang" : {
          "term_freq" : 1,
          "tokens" : [ {
            "position" : 1,
            "start_offset" : 8,
            "end_offset" : 12
          } ]
        }
      }
    }
  }
}

We can see that the name was correctly analyzed into the two terms 'jinshui' and 'tang', but the address is missing entirely.

Can anyone please help? Is there anything I am missing?

Thanks a lot!

1 Answer

6 votes

To lowercase the text you don't need a pattern analyzer. The pattern analyzer uses the regular expression to split the text into terms, so when the plain field value 'Beijing China' is indexed, the pattern ^.*$ matches the entire string, the whole input is consumed as a separator, and no tokens are produced at all; the _analyze test above only appeared to work because the newlines in the request body kept the pattern from matching. Use something like this instead:

PUT /test
{
  "settings": {
    "analysis": {
      "analyzer": {
        "test_lowercase": {
          "type": "custom",
          "filter": [
            "lowercase"
          ],
          "tokenizer": "keyword"
        }
      }
    }
  }
}

PUT /test/_mapping/Users
{
  "Users": {
    "properties": {
      "name": {
        "type": "string"
      },
      "address": {
        "type": "string",
        "analyzer": "test_lowercase"
      }
    }
  }
}

PUT /test/Users/12345
{"name": "Jinshui Tang",  "address": "Beijing China"}

And to verify you did the right thing, use this:

GET /test/Users/_search
{
  "fielddata_fields": ["name", "address"]
}

And you will see exactly how Elasticsearch is indexing your data:

        "fields": {
           "name": [
              "jinshui",
              "tang"
           ],
           "address": [
              "beijing",
              "china"
           ]
        }
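
Since the address is now indexed as the single lowercased term 'beijing china', the wildcard query from the question should now match the document:

GET /test/Users/_search
{
  "query": {
    "wildcard": {
      "address": "*beijing ch*"
    }
  }
}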