We have a JSON file with more than 100K relationship records, and we are trying to create relationships between existing nodes with the query below. It takes more than 3.5 minutes. The file contains several different node labels. Is there a more optimized approach we can use here?
Query:

CALL apoc.periodic.iterate(
  "CALL apoc.load.json('file:///data.json') YIELD value
   UNWIND value.relationships AS row
   RETURN row",
  "WITH row,
        'MATCH (startNode:' + row.start.labels[0] + ' {id: \"' + row.start.id + '\"}) RETURN startNode' AS startNodeQuery,
        'MATCH (endNode:' + row.end.labels[0] + ' {id: \"' + row.end.id + '\"}) RETURN endNode' AS endNodeQuery
   CALL apoc.cypher.run(startNodeQuery, {}) YIELD value AS startNodeRetValue
   CALL apoc.cypher.run(endNodeQuery, {}) YIELD value AS endNodeRetValue
   CALL apoc.merge.relationship(startNodeRetValue.startNode, row.label,
                                coalesce(row.properties, {}), {},
                                endNodeRetValue.endNode) YIELD rel
   RETURN count(rel)",
  {batchSize: 10000, parallel: false})
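One variant worth measuring (a sketch, not a verified fix: it relies on apoc.cypher.run accepting a params map, and on an index on id existing for each label) resolves both endpoints in a single apoc.cypher.run call and passes the ids as parameters, so each batch issues half as many inner queries and the generated query text only varies per label pair:

```cypher
CALL apoc.periodic.iterate(
  "CALL apoc.load.json('file:///data.json') YIELD value
   UNWIND value.relationships AS row
   RETURN row",
  // One generated query per label pair; the ids go in as parameters,
  // so the inner query text does not change on every row.
  "CALL apoc.cypher.run(
     'MATCH (startNode:' + row.start.labels[0] + ' {id: $startId}) ' +
     'MATCH (endNode:' + row.end.labels[0] + ' {id: $endId}) ' +
     'RETURN startNode, endNode',
     {startId: row.start.id, endId: row.end.id}) YIELD value AS nodes
   CALL apoc.merge.relationship(nodes.startNode, row.label,
                                coalesce(row.properties, {}), {},
                                nodes.endNode) YIELD rel
   RETURN count(rel)",
  {batchSize: 10000, parallel: false})
```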
File data example:

{
  "relationships": [
    {
      "label": "hasProduct",
      "properties": null,
      "start": {
        "labels": [ "Merchant" ],
        "id": "2129adda-21bf-44ee-85b7-84bda819fcf3"
      },
      "end": {
        "labels": [ "Product" ],
        "id": "81a66784-b73b-4227-8c10-e9dbbb82b120"
      }
    },
    {
      "label": "producedBy",
      "properties": null,
      "start": {
        "labels": [ "Company" ],
        "id": "5712b1ed-16b1-40c1-baac-d2224cda71e2"
      },
      "end": {
        "labels": [ "Product" ],
        "id": "81a66784-b73b-4227-8c10-e9dbbb82b120"
      }
    }
  ]
}
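Each generated MATCH on {id: ...} falls back to a full label scan unless id is indexed per label, which alone could account for most of the 3.5 minutes. For the labels in the example data above, the indexes would look like this (a sketch, assuming Neo4j 4.x+ CREATE INDEX syntax; the index names are arbitrary):

```cypher
// Without these, every endpoint lookup scans all nodes of that label.
CREATE INDEX merchant_id IF NOT EXISTS FOR (n:Merchant) ON (n.id);
CREATE INDEX product_id  IF NOT EXISTS FOR (n:Product)  ON (n.id);
CREATE INDEX company_id  IF NOT EXISTS FOR (n:Company)  ON (n.id);
```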