'ExclusiveLock errors' when writing to Neo4j with Spark

Two different teams have run into a mysterious 'locking' error when trying to write a graph into Neo4j with PySpark (a simplified sketch of the write is included after the trace):

Caused by: org.apache.spark.SparkException: 
Job aborted due to stage failure: Task 15 in stage 13.0 failed 1 times, 
most recent failure: 
Lost task 15.0 in stage 13.0 (TID 1033, localhost, executor driver): 
org.neo4j.driver.exceptions.TransientException: 
ForsetiClient[1] can't acquire ExclusiveLock{owner=ForsetiClient[2]} on NODE(243),
 because holders of that lock are waiting for ForsetiClient[1].
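For context, the write goes through the Neo4j Connector for Apache Spark and looks roughly like the sketch below. It's heavily simplified: the connection URL, credentials, labels, relationship type, and key property names are placeholders rather than our real schema.

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("graph-load")
    # Assumes the Neo4j Connector for Apache Spark jar is already on the classpath.
    .getOrCreate()
)

nodes_df = spark.read.parquet("/data/nodes")   # placeholder path
edges_df = spark.read.parquet("/data/edges")   # placeholder path

# Upsert nodes, merging on a key property.
(
    nodes_df.write
    .format("org.neo4j.spark.DataSource")
    .mode("Overwrite")
    .option("url", "bolt://localhost:7687")
    .option("authentication.basic.username", "neo4j")
    .option("authentication.basic.password", "********")
    .option("labels", ":Entity")
    .option("node.keys", "id")
    .save()
)

# Upsert relationships between the merged nodes, matching endpoints by the same key.
(
    edges_df.write
    .format("org.neo4j.spark.DataSource")
    .mode("Overwrite")
    .option("url", "bolt://localhost:7687")
    .option("authentication.basic.username", "neo4j")
    .option("authentication.basic.password", "********")
    .option("relationship", "RELATED_TO")
    .option("relationship.save.strategy", "keys")
    .option("relationship.source.labels", ":Entity")
    .option("relationship.source.node.keys", "src:id")
    .option("relationship.target.labels", ":Entity")
    .option("relationship.target.node.keys", "dst:id")
    .save()
)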

We suspect it has something to do with unique node properties, and we've experimented with uniqueness constraints without much luck so far.
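Roughly what the constraint attempt looks like (run via the Neo4j Python driver, using Neo4j 5 syntax; the label and property names are again placeholders matching the sketch above):

from neo4j import GraphDatabase

# Placeholder connection details.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "********"))

with driver.session() as session:
    # Uniqueness constraint on the property the nodes are merged on.
    session.run(
        "CREATE CONSTRAINT entity_id_unique IF NOT EXISTS "
        "FOR (e:Entity) REQUIRE e.id IS UNIQUE"
    )

driver.close()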

Any ideas?