We are trying to load millions of nodes and relationships into Neo4j. We are currently using the command below:
USING PERIODIC COMMIT LOAD CSV WITH HEADERS FROM "file:customers.csv" AS row CREATE (:Customer ....
But it is taking a lot of time.
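For context, the full statement looks roughly like the sketch below; the batch size and property names here are placeholders rather than our exact values (and note that newer Neo4j versions expect a file:/// URL pointing into the import directory):

    USING PERIODIC COMMIT 10000
    LOAD CSV WITH HEADERS FROM "file:customers.csv" AS row
    CREATE (:Customer {customerId: row.customerId, name: row.name});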
I did find a link that explains modifying the Neo4j store files directly: http://blog.xebia.com/combining-neo4j-and-hadoop-part-ii/
But that post seems to be quite old, so I wanted to know whether that process is still valid.
There is also an open issue on the "neo4j-spark-connector" GitHub repository, which has not been fully updated:
https://github.com/neo4j-contrib/neo4j-spark-connector/issues/15
Which of these approaches is the best way to go?