
I have a big graph that includes millions of nodes and relationships. I need to find all possible relationships between a group of nodes within <= 5 hops. Example: GroupA { Node1, Node2, Node3, ..., Node100 }, a group of 100 nodes.

Now I want to find all possible relationships between all of these nodes.

When I run a Cypher query that includes 100 nodes, everything is OK, but when I run it with 101 nodes I get a timeout. (All requests go through the REST API.)

 {
        "query": "start s = node(114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214), d = node(114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214)  match p = s -[r?*0..5]-> d return p  ",
        "params": {}
    }
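
As an aside, the same payload can be written with the ID list supplied once as a parameter instead of inlined twice. A minimal sketch, assuming the Cypher REST endpoint accepts an array parameter in START (the parameter name ids is my own, and the ID list is shortened here for brevity):

 {
        "query": "start s = node({ids}), d = node({ids}) match p = s -[r?*0..5]-> d return p",
        "params": { "ids": [114, 115, 116, 117, 118] }
    }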

Can someone explain to me what is happening? Is this a bad query?

UPDATE: Another issue discovered: when I run 200 Cypher queries in a loop, the server stops responding after query number 100.

Something like this:

for (i = 0; i < 200; i++)
{
    // build a single-pair variable-length path query and send it over REST
    query = "start s = node(" + sourceNodeId + "), d = node(" + destinationNodeId + ") match p = s -[r?*0.." + deep + "]-> d return p";
    RunCypherQuery(query);
}
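
For reference, the 1.x REST API also has a batch endpoint (POST to /db/data/batch, as I recall from the manual, so treat the path and payload shape as assumptions to verify) that can carry many of these single-pair queries in one HTTP request instead of 200 separate ones. A rough sketch with two of the pairs:

[
    { "method": "POST", "to": "/cypher", "id": 0,
      "body": { "query": "start s = node(114), d = node(115) match p = s -[r?*0..5]-> d return p", "params": {} } },
    { "method": "POST", "to": "/cypher", "id": 1,
      "body": { "query": "start s = node(116), d = node(117) match p = s -[r?*0..5]-> d return p", "params": {} } }
]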

What can cause this strange behavior? Thanks.

UPDATE:

I performed a memory tweak and increased the Java heap min and max to 4G instead of 4M and 64M (a sketch of the corresponding wrapper settings follows the stack trace below). The result is an exception:

Error 500 GC overhead limit exceeded
HTTP ERROR 500
Problem accessing /db/data/cypher. 
Reason:
GC overhead limit exceeded

Caused by:

java.lang.OutOfMemoryError: GC overhead limit exceeded
    at scala.collection.JavaConversions$.mapAsScalaMap(JavaConversions.scala:488)
    at scala.collection.JavaConverters$$anonfun$mapAsScalaMapConverter$1.apply(JavaConverters.scala:441)
    at scala.collection.JavaConverters$$anonfun$mapAsScalaMapConverter$1.apply(JavaConverters.scala:441)
    at scala.collection.JavaConverters$AsScala.asScala(JavaConverters.scala:80)
    at org.neo4j.cypher.internal.pipes.MutableMaps$.create(Pipe.scala:60)
    at org.neo4j.cypher.internal.pipes.ExecutionContext.newWith(Pipe.scala:136)
    at org.neo4j.cypher.internal.pipes.matching.AddedHistory.toMap(History.scala:75)
    at org.neo4j.cypher.internal.pipes.matching.PatternMatcher.isMatchSoFar(PatternMatcher.scala:166)
    at org.neo4j.cypher.internal.pipes.matching.PatternMatcher.org$neo4j$cypher$internal$pipes$matching$PatternMatcher$$traverseNextNodeFromRelationship(PatternMatcher.scala:98)
    at org.neo4j.cypher.internal.pipes.matching.PatternMatcher$$anonfun$4.apply(PatternMatcher.scala:150)
    at org.neo4j.cypher.internal.pipes.matching.PatternMatcher$$anonfun$4.apply(PatternMatcher.scala:150)
    at scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:175)
    at scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:175)
    at scala.collection.immutable.Stream$Cons.tail(Stream.scala:634)
    at scala.collection.immutable.Stream$Cons.tail(Stream.scala:626)
    at scala.collection.immutable.Stream.foldLeft(Stream.scala:302)
    at org.neo4j.cypher.internal.pipes.matching.PatternMatcher.traverseRelationship(PatternMatcher.scala:150)
    at org.neo4j.cypher.internal.pipes.matching.PatternMatcher.traverseNextSpecificNode(PatternMatcher.scala:61)
    at org.neo4j.cypher.internal.pipes.matching.PatternMatcher.traverseNode(PatternMatcher.scala:72)
    at org.neo4j.cypher.internal.pipes.matching.PatternMatcher.traverseNextNodeOrYield(PatternMatcher.scala:177)
    at org.neo4j.cypher.internal.pipes.matching.PatternMatcher.traverseNextSpecificNode(PatternMatcher.scala:60)
    at org.neo4j.cypher.internal.pipes.matching.PatternMatcher.traverseNode(PatternMatcher.scala:72)
    at org.neo4j.cypher.internal.pipes.matching.PatternMatcher.foreach(PatternMatcher.scala:36)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:194)
    at org.neo4j.cypher.internal.pipes.matching.PatternMatcher.map(PatternMatcher.scala:28)
    at org.neo4j.cypher.internal.pipes.matching.PatterMatchingBuilder.org$neo4j$cypher$internal$pipes$matching$PatterMatchingBuilder$$createPatternMatcher(PatterMatchingBuilder.scala:90)
    at org.neo4j.cypher.internal.pipes.matching.PatterMatchingBuilder$$anonfun$getMatches$1.apply(PatterMatchingBuilder.scala:47)
    at org.neo4j.cypher.internal.pipes.matching.PatterMatchingBuilder$$anonfun$getMatches$1.apply(PatterMatchingBuilder.scala:47)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:200)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:200)
    at org.neo4j.cypher.internal.pipes.matching.PatternMatcher.yieldThis(PatternMatcher.scala:185)
    at org.neo4j.cypher.internal.pipes.matching.PatternMatcher.traverseNextNodeOrYield(PatternMatcher.scala:175)
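
For reference, the heap increase described above is normally made in conf/neo4j-wrapper.conf; a rough sketch of such a tweak (values are in MB, and the exact numbers here are illustrative, not taken from this setup):

# conf/neo4j-wrapper.conf (illustrative values only)
wrapper.java.initmemory=512
wrapper.java.maxmemory=4096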
I upgraded to 1.9 and then reverted the memory tweak to the default values. In my last try I succeeded in running a query between two groups A and B, where group A has 100 nodes and group B has 1000 nodes. This query ran for about 69042 milliseconds => 1 min 9 sec. The question still exists: why can't I run a query with a source group larger than 100 nodes? – Zaber

2 Answers

1 vote

I think your question might be a little unclear. Do you want to return a list of all pairs that are connected? If so, here's a Cypher query that will return the connecting paths for all pairs:

START a=node(*) 
MATCH p=a-[*1..5]->b 
RETURN distinct p
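
If only pairs inside the 100-node group are wanted rather than the whole graph, the same pattern can be anchored to the group's IDs. A sketch along the same lines, assuming an array parameter named ids passed via the REST params map (verify parameter support against your Cypher version):

START a=node({ids}), b=node({ids})
MATCH p=a-[*1..5]->b
RETURN DISTINCT p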
-1 votes

I concur with Nicholas that you should try using Cypher, which you can send via the REST API as well (see here).

Try

START n=node(1,2,3), m=node(1,2,3)
MATCH n-[?*..4]->()-[r]->m
RETURN DISTINCT r;

of course replacing (1,2,3) with your long list of 100 node IDs! That's what you're looking for, if I understood you correctly!