Scenario:
I have loaded some nodes into a DB, such as:
node_1, node_2, node_3...
node_a, node_b, node_c...
I also have a CSV file like this:
x, node_1, node_2, node_3
node_a, 1, 2, 3
node_b, 4, 5, 6
node_c, 7, 8, 9
Requirement:
I want to batch-import relationships with something like LOAD CSV; I need to create these relationships:
(:node_a)-[:x{value: 1}]->(:node_1)
(:node_a)-[:x{value: 2}]->(:node_2)
(:node_a)-[:x{value: 3}]->(:node_3)
(:node_b)-[:x{value: 4}]->(:node_1)
(:node_b)-[:x{value: 5}]->(:node_2)
(:node_b)-[:x{value: 6}]->(:node_3)
…
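For a single cell of the matrix I can write the query by hand, e.g. the first cell (labels and the :x type are the ones from my example above):
// Hand-written query for one cell only: node_a -> node_1 with value 1
MATCH (start:node_a)
MATCH (end:node_1)
MERGE (start)-[:x {value: 1}]->(end);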
But I don't know how to iterate over the columns in LOAD CSV. Please let me know if you have any suggestions. Thank you so much.
@lei.yan
To simplify the process considerably, I suggest using the APOC library if possible,
especially apoc.cypher.doIt, which lets you build the query string from variables.
Something like this:
// This first line can be replaced with LOAD CSV WITH HEADERS FROM ...
CALL apoc.load.csv("test.csv") YIELD map WITH map
// Iterate over the column headers, skipping the first column 'x': node_1, node_2, node_3
UNWIND apoc.coll.remove(keys(map), 0) AS key
// Build and run queries like:
// MATCH (start:node_a), (end:node_1) WITH start, end MERGE (start)-[:x {value: 1}]->(end)
CALL apoc.cypher.doIt(
  "MATCH (start:" + map['x'] + "), (end:" + key + ") WITH start, end MERGE (start)-[:x {value: $value}]->(end)",
  {value: map[key]})
YIELD value
RETURN *
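If you prefer to avoid building query strings, a plain-Cypher sketch of the same idea is to compare the dynamic label names against labels(); the file name and the 'x' header below are just the ones from your example. Note that matching this way cannot use a label index, so on a large graph the apoc.cypher.doIt version above is usually faster.
// Sketch without apoc.cypher.doIt: resolve the dynamic labels through labels()
LOAD CSV WITH HEADERS FROM 'file:///test.csv' AS row
UNWIND [k IN keys(row) WHERE k <> 'x'] AS key
MATCH (start) WHERE row['x'] IN labels(start)
MATCH (end)   WHERE key IN labels(end)
MERGE (start)-[:x {value: toInteger(row[key])}]->(end);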
Neo4j is awesome, and Cypher and APOC are awesome. There are many nice solutions to one problem.
I figured out an alternative option that matches my scenario, something like this:
CALL apoc.load.csv('test.tsv', {header: true, sep: '\t'})
YIELD map, list
CALL {
  WITH map, list
  // Iterate over the column headers, skipping the first column
  UNWIND keys(map)[1..] AS key
  WITH *, toInteger(map[key]) AS value
  // Only create a relationship when the matrix cell is positive
  WHERE value > 0
  MATCH (g:Gene {symbol: list[0]})
  MATCH (c:Cell {name: key})
  MERGE (g)-[:x {value: value}]->(c)
} IN TRANSACTIONS OF 1000 ROWS;
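To spot-check the result afterwards, something like this can be used (it just reuses the Gene/Cell labels and the :x relationship type from the query above):
// Quick spot check of a few imported relationships
MATCH (g:Gene)-[r:x]->(c:Cell)
RETURN g.symbol AS gene, c.name AS cell, r.value AS value
ORDER BY gene, cell
LIMIT 10;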
Your solution is super cool and really opened my mind. Thanks again.