Good evening, I'm new to the community. I would like to import the diagram below into Neo4j.
Is there an easy way? I tried APOC and the ETL tool, but I got stuck on the relationships (APOC), and the ETL tool only imports two of the tables (regions and provinces). Is there an automated way? I'm new to the system.
The problem is that I can't import the data. I would like to understand whether I should import one table at a time as CSV, or the complete set of tables. Do the relationships need to be created by hand, or do I need to run additional scripts?
I would import one table at a time. If you are writing your own import Cypher, you will need to create the nodes and the relationships yourself; there is nothing automatic. It looks like you already understand this concept, based on the code you posted.
Your CSV files will need to include the equivalent of primary and foreign keys, so you can link the entries with a relationship. I can help if you provide snippets of the data from each file.
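For example, I would expect headers something like the following, where id_provincia and id_regione act as the foreign keys on the comune file (these column names are just my assumption of how your files are laid out, taken from the scripts below):
Regione.csv: id_regione, sigla_regione, nome_regione
provincia.csv: id_provincia, provincia, sigla
comune.csv: id, istat, prefisso, cap, codfisco, link, id_provincia, id_regione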
Try this... it was based on my understanding of the data. We can fix it if it doesn't meet your model expectations.
I separated the import into three parts. Run the Regione and Provincia import scripts first. The last one links them together, so those nodes must already exist.
load csv with headers from "file:///Regione.csv" as row
merge(n:Regione{id: row.id_regione})
set n.sigla_regione = row.sigla_regione, n.nome_regione = row.nome_regione
load csv with headers from "file:///provincia.csv" as row
merge(n:Provincia {id: row.id_provincia})
set n.provincia = row.provincia, n.sigla = row.sigla
load csv with headers from "file:///comune.csv" as row
match(p:Provincia{id: row.id_provincia})
match(r:Regione{id: row.id_regione})
merge(c:Comune{id: row.id})
set c.istat = row.istat, c.prefisso = row.prefisso, c.cap = row.cap, c.codfisco = row.codfisco, c.link = row.link
merge(c)-[:HAS_REGIONE]->(r)
merge(c)-[:HAS_PROVINCIA]->(p)
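Once the three scripts have run, you can sanity-check the result with a couple of quick queries, for example:
// Count nodes per label
MATCH (n) RETURN labels(n) AS label, count(*) AS nodes;
// Spot-check that comuni are linked to their provinces
MATCH (c:Comune)-[:HAS_PROVINCIA]->(p:Provincia) RETURN p.provincia AS provincia, count(c) AS comuni ORDER BY comuni DESC LIMIT 10;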
BTW, this data seems completely different from the data you were importing in your posted scripts. Anyway, it shows you the general approach to importing data.
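One more tip: if the files are large, MERGE will be much faster with uniqueness constraints on the id properties, created before you run the imports above. Assuming you are on Neo4j 4.4 or later, that would look something like this (older versions use the ASSERT ... IS UNIQUE syntax instead):
// One uniqueness constraint per label, on the property used by MERGE
CREATE CONSTRAINT regione_id IF NOT EXISTS FOR (r:Regione) REQUIRE r.id IS UNIQUE;
CREATE CONSTRAINT provincia_id IF NOT EXISTS FOR (p:Provincia) REQUIRE p.id IS UNIQUE;
CREATE CONSTRAINT comune_id IF NOT EXISTS FOR (c:Comune) REQUIRE c.id IS UNIQUE;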
Do you have enough data to justify splitting the import into multiple transactions? If so, you can use a CALL subquery with IN TRANSACTIONS, like this:
load csv with headers from "file:///Regione.csv" as row
call {
with row
merge(n:Regione{id: row.id_regione})
set n.sigla_regione = row.sigla_regione, n.nome_regione = row.nome_regione
} in transactions of 1000 rows
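One note: if you run that in Neo4j Browser, you will most likely need to prefix the statement with :auto, because CALL ... IN TRANSACTIONS has to run in an auto-commit transaction:
:auto LOAD CSV WITH HEADERS FROM "file:///Regione.csv" AS row
followed by the same CALL block as above.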