
Summary

We need to increase percolator performance (throughput).

The most likely approach is scaling out to multiple servers.

Questions

How to do scaling out right?

1) Would increasing the number of shards in the underlying index allow running more percolate requests in parallel?

2) How much memory does an ElasticSearch server need if it does percolation only?

Is it better to have 2 servers with 4GB RAM or one server with 16GB RAM?

3) Would having an SSD meaningfully help percolator performance, or is it better to increase RAM and/or the number of nodes?

Our current situation

We have 200,000 queries (job search alerts) in our jobs index. We are able to run 4 parallel queues that call the percolator. Each queue can percolate a batch of 50 jobs in about 35 seconds, so we can percolate about:

4 queues * 50 jobs per batch / 35 seconds * 60 seconds in minute = 343 jobs per minute
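The arithmetic above can be checked with a quick back-of-envelope sketch (plain Python; the queue, batch, and latency numbers are the ones quoted above):

```python
# Back-of-envelope percolation throughput from the numbers above.
queues = 4             # parallel percolation queues
batch_size = 50        # jobs percolated per request
seconds_per_batch = 35 # observed latency per batch

jobs_per_minute = queues * batch_size / seconds_per_batch * 60
print(round(jobs_per_minute))  # -> 343
```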

We need more.

Our jobs index has 4 shards, and we are using a .percolator sitting on top of that jobs index.
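For reference, each alert is registered as a document in the `.percolator` type of the jobs index, roughly like this (the document ID and field name are illustrative, not our actual mapping):

`PUT /jobs/.percolator/alert-123`

```json
{
  "query": {
    "match": {
      "title": "java developer"
    }
  }
}
```

Percolating a job document then runs it against all of these registered queries.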

Hardware: a 2-processor server with 32 cores total and 32GB RAM. We allocated 8GB RAM to ElasticSearch.

When the percolator is working, the 4 percolation queues mentioned above consume about 50% of CPU.

When we tried to increase the number of parallel percolation queues from 4 to 6, CPU utilization jumped to 75%+. Worse, the percolator started to fail with NoShardAvailableActionException:

[2015-03-04 09:46:22,221][DEBUG][action.percolate ] [Cletus Kasady] [jobs][3] Shard multi percolate failure org.elasticsearch.action.NoShardAvailableActionException: [jobs][3] null

That error seems to suggest that we should increase the number of shards and eventually add a dedicated ElasticSearch server (and later increase the number of nodes).

Related: How to Optimize elasticsearch percolator index Memory Performance


1 Answer



How to do scaling out right?

Q: 1) Would increasing number of shards in underlying index allow running more percolate requests in parallel?

A: No. Sharding is only really useful when building a cluster; additional shards on a single instance may in fact worsen performance. In general, the number of shards should equal the number of nodes for optimal performance.
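As an example, if you grow to two nodes, you would recreate the index with two primary shards so each node hosts one (an illustrative index-creation settings fragment, not your exact configuration):

`PUT /jobs`

```json
{
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 0
  }
}
```

Note that the number of primary shards is fixed at index creation time, so plan it around your expected node count.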

Q: 2) How much memory does ElasticSearch server need if it does percolation only?

Is it better to have 2 servers with 4GB RAM or one server with 16GB RAM?

A: Percolator indices reside entirely in memory, so the answer is: a lot. It depends entirely on the size of your index. In my experience, 200,000 searches would require a 50MB index; in memory, that index would occupy around 500MB of heap. Therefore 4GB RAM should be enough if this is all you're running. I would suggest more nodes in your case. However, as the size of your index grows, you will need to add RAM.

Q: 3) Would having SSD meaningfully help percolator's performance, or it is better to increase RAM and/or number of nodes?

A: I doubt it. As I said before, percolators reside in memory, so disk performance isn't much of a bottleneck.

EDIT: Don't take my word on those memory estimates. Check out the site plugins on the main ES site; I found BigDesk particularly helpful for watching performance counters for scaling and planning purposes. This should give you more valuable information for estimating your specific requirements.

EDIT in response to comment from @DennisGorelik below:

I got those numbers purely from observation, but on reflection they make sense.

  1. 200K queries to 50MB on disk: this ratio means the average query occupies about 250 bytes when serialized to disk.
  2. 50MB index to 500MB on heap: rather than serialized objects on disk, we are dealing with in-memory Java objects. Think about deserializing XML (or any data format, really): the in-memory objects generally come out around 10x larger.
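The two ratios above can be checked with simple arithmetic (a sketch using the figures from this answer; the 250-bytes-per-query and 10x inflation factors are observed estimates, not exact values):

```python
# Rough memory estimate for a percolator index, from the ratios above.
num_queries = 200_000
bytes_per_query = 250   # average serialized query size (observed estimate)
inflation_factor = 10   # on-disk -> in-heap Java object blow-up (observed estimate)

disk_mb = num_queries * bytes_per_query / 1_000_000
heap_mb = disk_mb * inflation_factor
print(disk_mb, heap_mb)  # -> 50.0 500.0
```

Plug in your own query count and average query size to project heap needs as the index grows.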