
My task is to import a number of friend or follow relationships into a simple neo4j graph db. My input data is a csv with the following structure:

owner,friend,type
Bob,Charlie,friend
Alice,Chris,follower

In the above example Charlie is a friend of Bob and Chris is a follower of Alice. I want to bulk import these into neo4j using LOAD CSV but I'm having trouble creating the conditional relationships during import. The import code looks something like:

LOAD CSV WITH HEADERS FROM "file:///graph.csv" AS csvLine
WITH csvLine.owner AS owner,
     csvLine.friend AS friend,
     csvLine.type AS Type

MERGE (o:Person { name: owner })
MERGE (c:Person { name: friend })
MERGE (o)<-[:IS_FRIEND {type: Type}]-(c);

I'd rather have two types of relationships IS_FRIEND and FOLLOWS. But when I try conditional statements like:

CASE WHEN Type == "friend" THEN MERGE (o)<-[:IS_FRIEND]-(c) ELSE (o)<-[:FOLLOWS]-(c);

I receive syntax errors on the use of CASE.

Is there a way to make conditional relationships during bulk import from csv like this?

2 Answers


Strange that this was not answered. You can look at Mark Needham's post on handling conditionals with LOAD CSV: https://markhneedham.com/blog/2014/06/17/neo4j-load-csv-handling-conditionals/

// cards (from the blog post above)
FOREACH(n IN (CASE WHEN csvLine.type IN ["yellow", "red", "yellowred"] THEN [1] ELSE [] END) |
  FOREACH(t IN CASE WHEN team = home THEN [home] ELSE [away] END |
    MERGE (stats)-[:RECEIVED_CARD]->(card {time: csvLine.time, type: csvLine.type})
  )
)
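Applied to the CSV in the question, the trick might look something like this (an untested sketch; the CASE yields a one-element list when the condition holds and an empty list otherwise, so the MERGE inside the FOREACH runs zero or one times):

```cypher
LOAD CSV WITH HEADERS FROM "file:///graph.csv" AS csvLine
MERGE (o:Person { name: csvLine.owner })
MERGE (c:Person { name: csvLine.friend })
// create IS_FRIEND only for "friend" rows
FOREACH (_ IN CASE WHEN csvLine.type = "friend" THEN [1] ELSE [] END |
  MERGE (o)<-[:IS_FRIEND]-(c)
)
// create FOLLOWS only for "follower" rows
FOREACH (_ IN CASE WHEN csvLine.type = "follower" THEN [1] ELSE [] END |
  MERGE (o)<-[:FOLLOWS]-(c)
)
```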

If you have access to APOC Procedures, you can use the conditional procedures (such as apoc.do.when) to handle this case; it can be a bit more readable than the FOREACH trick.

MERGE (o:Person { name: owner })
MERGE (c:Person { name: friend })
WITH o, c, Type
CALL apoc.do.when(Type = 'friend',
  "MERGE (o)<-[r:IS_FRIEND]-(c) RETURN r",
  "MERGE (o)<-[r:FOLLOWS]-(c) RETURN r",
  {o:o, c:c}) YIELD value
WITH value.r AS rel, Type
SET rel.type = Type

The last SET line is there primarily because you can't end a query with a CALLed procedure, so we need some kind of SET operation. Feel free to replace with anything else that makes sense for your data.
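Putting it together with the LOAD CSV preamble from the question, a full import might look roughly like this (a sketch assuming the APOC plugin is installed and the CSV lives in the database's import directory):

```cypher
LOAD CSV WITH HEADERS FROM "file:///graph.csv" AS csvLine
WITH csvLine.owner AS owner,
     csvLine.friend AS friend,
     csvLine.type AS Type
MERGE (o:Person { name: owner })
MERGE (c:Person { name: friend })
WITH o, c, Type
// pick the relationship type per row; o and c are passed in via the params map
CALL apoc.do.when(Type = 'friend',
  "MERGE (o)<-[r:IS_FRIEND]-(c) RETURN r",
  "MERGE (o)<-[r:FOLLOWS]-(c) RETURN r",
  {o:o, c:c}) YIELD value
WITH value.r AS rel, Type
SET rel.type = Type
```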