I created a Neo4j 3 database that includes some test data, and also a small application that sends HTTP Cypher requests to Neo4j. These requests are always of the same type. Actually, it's a query template that only differs in some attributes. I am interested in the performance of these statements.
I know that I can use PROFILE to get some information in the browser. But I want to execute a set of statements, e.g. 10 example queries, several times and calculate the average performance. Is there an easy way or a tool to do this, or do I have to write e.g. a Python script that collects these values? It does not have to be a big application; I just want to see some general performance metrics.
1 Answer
I don't think there is an out-of-the-box tool for benchmarking Neo4j yet. So your best option is to implement your own solution - but you have to be careful if you want to get results that are (to some degree) representative:
Check the docs on performance.
Give the Neo4j JVM sufficient time to warm up. This means that you'll want to run a warmup phase with the queries and discard the execution times measured during that phase.
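A minimal sketch of such a loop, assuming the Python requests library, the Neo4j 3.x transactional HTTP endpoint on the default localhost:7474 address, and placeholder credentials and query:

```python
import time
import requests

URL = "http://localhost:7474/db/data/transaction/commit"
AUTH = ("neo4j", "password")   # placeholder credentials
# Placeholder query template; use the {name} parameter syntax on Neo4j < 3.1.
QUERY = "MATCH (n:Person {name: $name}) RETURN n"

def run_query(params):
    """Send one Cypher statement and return the client-side elapsed time in seconds."""
    payload = {"statements": [{"statement": QUERY, "parameters": params}]}
    start = time.perf_counter()
    response = requests.post(URL, json=payload, auth=AUTH)
    response.raise_for_status()
    return time.perf_counter() - start

# Warmup phase: run the query, but throw the timings away.
for _ in range(20):
    run_query({"name": "Alice"})

# Measurement phase: keep the timings for later aggregation.
timings = [run_query({"name": "Alice"}) for _ in range(10)]
```

Note that this measures end-to-end time as seen by the client, including HTTP and serialization overhead, which matches the scenario described in the question.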
Instead of using a client-server architecture, you can also opt to use Neo4j in embedded mode, which will give you a better idea of the raw query performance (without the overhead of the driver and the serialization/deserialization process). However, in this case you have to implement the benchmark on the JVM (in Java or possibly Jython).
Run each query multiple times. Do not use the average, as it is sensitive to outlier values (you can get high values for a number of reasons, e.g. if the OS scheduler starts some job in the background during a particular query execution).
A good paper on the topic, How not to lie with statistics: the correct way to summarize benchmark results, argues that you should use the geometric mean.
It is also common practice in performance experiments in computer science papers to use the median value. I tend to use this option - e.g. this figure shows the execution times of two simple SPARQL queries on in-memory RDF engines (Jena and Sesame), for their first executions and the median values of 5 consecutive executions.
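Both aggregates are easy to compute once you have the list of timings. A minimal sketch, assuming the timings list collected in the measurement loop above (statistics.geometric_mean requires Python 3.8+):

```python
import statistics

median_time = statistics.median(timings)
geo_mean_time = statistics.geometric_mean(timings)
print(f"median: {median_time:.4f} s, geometric mean: {geo_mean_time:.4f} s")
```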
Note, however, that Neo4j employs various caching mechanisms, so if you run the same query multiple times, it will only need to compute the results on the first execution; subsequent executions will use the cache, unless the database is updated between the query executions.
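One way to mitigate this, assuming your query template is parameterized over the attributes mentioned in the question, is to vary the parameter values between executions (the values below are placeholders):

```python
# Cycle through different parameter values so that successive executions
# cannot be served purely from the cache.
names = ["Alice", "Bob", "Carol", "Dave", "Eve"]
timings = [run_query({"name": names[i % len(names)]}) for i in range(10)]
```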
As a good rule of thumb, design the benchmark to resemble your actual workload as closely as possible - in many cases, application-specific macrobenchmarks make more sense than microbenchmarks. So if each query will only be evaluated once by the application, it is perfectly acceptable to benchmark only the first evaluation.
(Bonus.) Another good read on the topic is The Benchmark Handbook - chapter 1 discusses the most important criteria for domain-specific benchmarks (relevance, portability, scalability and simplicity). These are probably not required for your benchmark, but they are nice to know.
I worked on a cross-technology benchmark covering relational, graph and semantic databases, including Neo4j. You might find some useful ideas or code snippets in the repository: https://github.com/FTSRG/trainbenchmark