I use the apoc.periodic.iterate procedure:
CALL apoc.periodic.iterate(
  "LOAD CSV FROM 'file:///user_friends.csv' AS line
   WITH toInteger(line[1]) AS start_id, toInteger(line[0]) AS end_id
   MATCH (a:User {id: start_id}), (b:User {id: end_id})
   RETURN a, b",
  "WHERE a IS NOT NULL AND b IS NOT NULL
   MERGE (a)-[f:FOLLOWS]->(b)
   ON CREATE SET f.update_date = date('2019-06-10')
   ON MATCH SET f.update_date = date('2019-06-10')",
  {batchSize: 20000, parallel: true, concurrency: 20, iterateList: true})

I ran this query twice and got two different sets of errors:
The errorMessages column was empty.

First run, batch column:
"total": 307,
"committed": 76,
"failed": 231,
"errors": {
"java.lang.NullPointerException": 231
operations column:
"total": 3064913,
"committed": 760000,
"failed": 2310000,
"errors": {


Second run, batch column:
"total": 154,
"committed": 21,
"failed": 133,
"errors": {
"org.neo4j.kernel.DeadlockDetectedException: ForsetiClient[2] can't acquire ExclusiveLock{owner=ForsetiClient[18]} on RELATIONSHIP(6416563251), because holders of that lock are waiting for ForsetiClient[2].
Wait list:ExclusiveLock[
Client[18] waits for [2,13]]": 1,
"java.lang.NullPointerException": 132
operations column:
"total": 3064913,
"committed": 420000,
"failed": 2640000,
"errors": {


What do these errors mean, and how can I change my query to avoid them?

Don't use parallel in this particular case.

From the documentation there is a warning:

Please note that the parallel operation only works well for non-conflicting updates otherwise you might run into deadlocks.

When relationships are merged, locks are taken on the start and end nodes of the relationship. If the same nodes are used (start or end) multiple times throughout the CSV, then during the parallel batching there is a chance for them to conflict and a deadlock to occur.

Parallel batching is best for nodes only, we wouldn't recommend it for relationships unless you are sure the nodes involved only occur once.
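Concretely, the same call with parallel batching turned off would look something like this (the two statements are unchanged from your original query; only the config map differs):

```cypher
CALL apoc.periodic.iterate(
  "LOAD CSV FROM 'file:///user_friends.csv' AS line
   WITH toInteger(line[1]) AS start_id, toInteger(line[0]) AS end_id
   MATCH (a:User {id: start_id}), (b:User {id: end_id})
   RETURN a, b",
  "WHERE a IS NOT NULL AND b IS NOT NULL
   MERGE (a)-[f:FOLLOWS]->(b)
   ON CREATE SET f.update_date = date('2019-06-10')
   ON MATCH SET f.update_date = date('2019-06-10')",
  // parallel:false runs the batches one after another, so the node locks
  // taken by MERGE on a relationship can never conflict between batches
  {batchSize: 20000, parallel: false, iterateList: true})
```

Serial batches are slower, but each batch commits cleanly instead of failing and rolling back, so in practice the total wall-clock time is often competitive.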


I don't update the nodes!
The relationships in my CSV are all distinct; no row is duplicated.
But some start or end nodes (never the same start/end pair twice) appear multiple times, for example:

start_node, end_node
1 , 2
1 , 3
1 , 4
2 , 1
2 , 5
4 , 3

I don't understand why a deadlock occurred.
My CSV file only has duplicate end nodes. Is that what caused the deadlock?

As I said above:

When relationships are merged, locks are taken on the start and end nodes of the relationship.

In your query, you MERGE a relationship. When this happens, locks are taken on the start and end nodes.

If, in another batch, one of those nodes is a start or end node of a different relationship being merged, there will be lock contention. If between at least two different parallel batches, each has locks on nodes the others need, there is a deadlock, and one or the other must roll back its transaction.
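A simplified sketch of the lock ordering, using two rows taken from a CSV like yours (1,2 and 2,1) landing in two parallel batches, might look like this:

```cypher
// Batch A (transaction 1) processes the row "1,2":
MERGE (a:User {id: 1})-[:FOLLOWS]->(b:User {id: 2});
// acquires the lock on node 1, then waits for the lock on node 2

// Batch B (transaction 2), running in parallel, processes the row "2,1":
MERGE (a:User {id: 2})-[:FOLLOWS]->(b:User {id: 1});
// acquires the lock on node 2, then waits for the lock on node 1
// Neither transaction can proceed: the server raises
// DeadlockDetectedException and rolls one of them back.
```

The exact lock acquisition order inside the kernel is an implementation detail; the point is that two parallel batches each holding a lock the other needs is enough to deadlock, even with no duplicate rows.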

To avoid this, do not use parallel:true when using periodic iterate to create or merge or delete relationships.


Thanks. It's very bad that Neo4j doesn't have any solution for this situation! :face_with_raised_eyebrow:

There really is no good way around it in any db to my knowledge, at least not without sacrificing something important.

Please consider: locks are necessary for relationship creation/deletion because they are required for consistency (without locks, a relationship could end up with one node or the other out of sync with the relationships on it). We can't sacrifice this without embedding eventual inconsistency into the db design.

So locks are required. But you can't have both locking and parallel batching without risking lock contention and deadlock, unless you can guarantee that the nodes involved don't occur multiple times. The best you can do is build in a retry mechanism, but this isn't foolproof. You can add retries to periodic iterate if you like; I forgot about those earlier.
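If you do want to keep parallel:true, apoc.periodic.iterate accepts a retries option in its config map that re-runs a failed batch, so a batch rolled back by a deadlock gets another chance once the conflicting transaction has committed. A sketch (the two statement strings stand in for your original driving and update statements):

```cypher
CALL apoc.periodic.iterate(
  "<driving statement returning a, b>",
  "<update statement merging (a)-[:FOLLOWS]->(b)>",
  // retries: re-execute a failed batch up to 3 times before reporting it
  // as failed; this reduces, but does not eliminate, deadlock failures
  {batchSize: 20000, parallel: true, concurrency: 20,
   iterateList: true, retries: 3})
```

Check the failed and retries columns of the result afterwards; if batches still fail after all retries, falling back to parallel:false remains the safe option.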

I still don't clearly see your reasoning!

First, do you understand deadlocks?

If you understand this, and you understand that Neo4j takes locks on nodes when creating or deleting relationships, and if you understand that the locks taken during parallel batching can interfere with each other, then you should be able to understand what is happening and why.

Also, if you do a Google search on "oracle db deadlock" or similar, you'll see that deadlock situations are not unique to Neo4j. This possibility (and the same kind of solution, rolling back one of the transactions) exists in any ACID database when parallel updates are being executed.