Spark connector - node failures cause constraint errors

Hi,

We are getting unique constraint violations when loading data into Neo4j with the Spark connector. We run Spark on Kubernetes. Our theory is that when a node fails, Spark reschedules the job's tasks on another node and re-writes data that was already committed, which then trips the unique constraint. Has anyone experienced this?

Reena