Hi Team,
I am trying to load data from a CSV with 3 columns: ename, ip, circle. ename is my MERGE condition, but I have also defined a unique property constraint on the ip column.
While loading the data, I am getting the error below:
Neo.ClientError.Schema.ConstraintValidationFailed: Node(140340) already exists with label qwerty and property ip = '10.1.1.3'
CSV file:
ename,ip,circle
rrh001,10.1.1.1,E
rrh002,10.1.1.2,W
rrh003,10.1.1.3,W
eer004,10.1.1.3,S
eer05,11.1.2.4,N
Query used:
LOAD CSV WITH HEADERS FROM "file:///super.csv" AS row
MERGE (a:qwerty {ename: row.ename})
ON CREATE SET a.ip = row.ip,
              a.circle = row.circle
After running the above query, no records are loaded under the label.
My expectation is that the records which do not violate the constraint should still be loaded.
Please share your knowledge.
Regards
Akshat
There is no option in LOAD CSV to skip 'bad rows'.
Since you have not specified a periodic commit (see LOAD CSV in the Cypher Manual), it is trying to commit the entire CSV in one transaction, so any Cypher error is going to reject the entire CSV.
You could set USING PERIODIC COMMIT 1, which would commit after each row, but this may lead to performance issues. Also, my expectation is that it would commit all rows up until the failure. You would then need to modify the CSV file, remove the offending row, and re-run the LOAD CSV. Ideally you might want to create a new CSV with just the rows after the failure and then LOAD CSV using the new file.
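For reference, a minimal (untested) sketch of what that would look like, reusing the file and label from your post; note that USING PERIODIC COMMIT must be the first clause of the query:

USING PERIODIC COMMIT 1
LOAD CSV WITH HEADERS FROM "file:///super.csv" AS row
MERGE (a:qwerty {ename: row.ename})
ON CREATE SET a.ip = row.ip,
              a.circle = row.circle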
Or drop the constraint prior to the LOAD CSV, which will allow the data to be loaded, but then you will have two nodes with the same IP. Re-enabling the constraint will error as above, indicating there is more than one node with the same IP, at which point you could manually delete one of the nodes and then re-enable the constraint. However, if your LOAD CSV results in, for example, 100 pairs of nodes with duplicate IPs, this could be a tedious process.
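A rough sketch of that workflow, assuming the older CREATE/DROP CONSTRAINT ... ASSERT syntax (adjust for your Neo4j version):

// Drop the uniqueness constraint so the load can complete
DROP CONSTRAINT ON (n:qwerty) ASSERT n.ip IS UNIQUE;

// ... run the LOAD CSV from the original post ...

// List the duplicate IPs so the extra nodes can be found and deleted
MATCH (n:qwerty)
WITH n.ip AS ip, collect(n) AS nodes
WHERE size(nodes) > 1
RETURN ip, [x IN nodes | x.ename] AS enames;

// Once the duplicates are gone, re-enable the constraint
CREATE CONSTRAINT ON (n:qwerty) ASSERT n.ip IS UNIQUE;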
Hello Dane,
Thanks a lot for your quick reply.
Is there an option to do so if we load the data from an Oracle DB? Please confirm.
Manual intervention to remove bad records is not possible, as the data count is 40 to 50 lakh (4 to 5 million) records.
Best Regards
Akshat
Is there an option to do so if we load the data from an Oracle DB?
Not that I am aware of. Whether you load the data via LOAD CSV or apoc.load.jdbc, I don't see an option today to skip bad rows.
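For completeness, this is roughly what the Oracle path would look like with apoc.load.jdbc (requires the APOC plugin and an Oracle JDBC driver); the connection string and table name here are made up, and Oracle typically returns column names in uppercase. A constraint violation would still fail the whole statement, just as with LOAD CSV:

// Hypothetical connection string and SQL; adapt to your environment
CALL apoc.load.jdbc(
  'jdbc:oracle:thin:user/password@//dbhost:1521/service',
  'SELECT ename, ip, circle FROM super'
) YIELD row
MERGE (a:qwerty {ename: row.ENAME})
ON CREATE SET a.ip = row.IP,
              a.circle = row.CIRCLE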
Hi Dane,
Okay!!
Is it possible to take this as a development request for a future release, or can a hotfix be provided in the coming days for this ask?
Thanks in advance!!
Best Regards
Akshat