
I have a very long Cypher query in my app (running on Node.js and Neo4j 2.0.1), which creates about 16 nodes and 307 relationships between them in one go. The query string is about 50K characters long.

The high number of relationships is determined by the data model, which I will probably change later, but if I decide to keep everything as it is, two questions:

1) What is the maximum size of a single Cypher query I can send to Neo4j?

2) What would be the best strategy to deal with a query that is too long? Split it into smaller ones and batch them in a transaction? I would rather not do that, because then I lose the consistency I get from combining MERGE and CREATE: the query automatically recognizes nodes that do not exist yet, creates them, and then I can create relationships between them using the identifiers I already obtained through the MERGE.

Thank you!

For query string size there is this question already asked on SO: stackoverflow.com/questions/22259802/… - jjaderberg
And based on that answer, your 50k should be fine. - jjaderberg
@jjaderberg ok thanks, so when does it make sense to split them up? - Aerodynamika

1 Answer


I usually recommend to:

  1. Use smaller statements, so that the query plan cache can kick in and execute your query immediately without recompiling it; for this you also need

  2. Parameters, e.g. {context} or {user}

I think a statement that touches up to 10-15 elements is easy to handle.

You can still execute all of them in a single transaction via the transactional Cypher endpoint, which allows batching of statements together with their parameters.
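As a rough sketch of what that batching looks like: in Neo4j 2.x the transactional endpoint accepts a POST to `/db/data/transaction/commit` with a JSON body containing a list of statements, each paired with its own parameters. The node labels, property names, and statement text below are illustrative, not taken from the question's actual data model:

```javascript
// Build the JSON body for Neo4j 2.x's transactional Cypher endpoint:
// POST http://localhost:7474/db/data/transaction/commit
// Each entry is { statement: <Cypher string>, parameters: <object> }.
function buildTransactionBody(statements) {
  return JSON.stringify({ statements: statements });
}

// Hypothetical example: two small parameterized statements that would
// run atomically in one transaction. Because the statement text is
// constant and only the parameters vary, the query plan cache applies.
var statements = [
  {
    statement: 'MERGE (u:User {name: {name}}) RETURN id(u)',
    parameters: { name: 'alice' }
  },
  {
    statement:
      'MATCH (a:User {name: {from}}), (b:User {name: {to}}) ' +
      'MERGE (a)-[:KNOWS]->(b)',
    parameters: { from: 'alice', to: 'bob' }
  }
];

var body = buildTransactionBody(statements);

// The HTTP request itself is omitted here; the body above would be sent
// with Content-Type: application/json to the commit URL shown above.
console.log(body);
```

Because every statement carries its own parameter map, you can keep each statement small while still getting all-or-nothing semantics from the single transaction.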