Large Bulk Writes - Best Practice - Avoid DeadLock Exception

gidon
Node

So we just started using Neo4j to save Twitter data at large scale.
Our issue is that when bulk-saving the data from a multi-threaded writer service we began receiving so many deadlock exceptions that catching the exception and retrying is not an option.
We cannot avoid bulk-writing the same nodes or relationships, since we have multiple data scanners that can sometimes deal with the same Twitter data.
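For readers who can afford retries, the textbook mitigation is exponential backoff with jitter, so colliding writers desynchronize instead of deadlocking again on the retry. A minimal sketch (the `TransientError` class and the `write_batch` callable are stand-ins, not neomodel APIs):

```python
import random
import time

class TransientError(Exception):
    """Stand-in for the driver's deadlock/transient exception."""

def run_with_retry(write_batch, batch, max_attempts=5, base_delay=0.1):
    """Retry a bulk write on transient (deadlock) errors with
    exponential backoff plus random jitter."""
    for attempt in range(max_attempts):
        try:
            return write_batch(batch)
        except TransientError:
            if attempt == max_attempts - 1:
                raise
            # Sleep 0.1s, 0.2s, 0.4s, ... plus jitter so retrying
            # writers don't collide in lockstep.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
```

Whether this is viable depends on the retry volume; the jitter is the part that usually makes it workable under contention.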

For instance, the same Twitter user or tweet can be extracted by multiple scanners and sent to our multi-threaded writing queue. If the user, tweet, or relationships are currently being processed by an existing bulk write, we get the deadlock exception.

Since we are dealing with large-scale data extraction we need to bulk write and cannot save one node or relationship per write. Moreover, we need to auto-scale the writing service according to the amount of data being scanned at the moment, which will only increase deadlock exceptions.

Before every bulk write we remove duplicates, but this does not ensure that subsequent bulks will not contain the same user, tweet, or relationship.
Even if it did, we would still encounter situations where one bulk is saving a relationship like
user_1 -[:FOLLOWS]-> user_2
while another bulk contains the same user,
user_2 -[:TWEETS]-> tweet_1
resulting in a deadlock exception.

We also tried sorting the nodes by our id field before actually bulk writing to the DB, thinking it might reduce deadlocks, but it did not help.
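For what it's worth, sorting node ids alone may not be enough: relationship writes lock both endpoint nodes, so the ordering has to cover every node a bulk touches, and every writer must apply the same ordering for lock acquisition to line up. A hedged sketch of one possible global write order (the batch shapes here are hypothetical, not neomodel structures):

```python
def order_batch(nodes, rels):
    """Deduplicate a batch and impose one global write order.
    `nodes` is a list of ids; `rels` is a list of (start_id, rel_type, end_id).
    Relationship writes lock both endpoints, so relationships are ordered
    by their endpoint pair, not just their start node."""
    unique_nodes = sorted(set(nodes))
    # Canonical order for relationships: smaller endpoint first, then the
    # larger, then type, so two bulks touching the same pair of nodes
    # tend to acquire locks in the same sequence.
    unique_rels = sorted(
        set(rels),
        key=lambda r: (min(r[0], r[2]), max(r[0], r[2]), r[1]),
    )
    return unique_nodes, unique_rels
```

This only helps if writes inside the transaction actually happen in the returned order; it reduces, rather than eliminates, lock-order inversions.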

We are using the neomodel OGM in Python.
Our Neo4j cluster is on Azure.

3 REPLIES

mdfrenchman
Graph Voyager

When I saw "Best Practice" I was hoping you were sharing a solution. We also deal with this regularly. Our way around it is to avoid having multiple writes that touch the same part of the graph running at the same time. This works for the most part.
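One way to sketch that "don't touch the same space concurrently" idea is to partition records by a hash of their id, so writes for the same node always land on the same single-threaded writer and never contend for the same locks. A rough sketch under that assumption (`write_batch` is a stand-in for the real save call, and this does not fully cover relationships, which span two nodes):

```python
import queue
import threading

def partition_writers(records, num_workers, write_batch):
    """Route each (node_id, payload) record to a worker by hash of its id.
    Each worker is single-threaded, so two records for the same node are
    always written sequentially by the same worker."""
    queues = [queue.Queue() for _ in range(num_workers)]

    def worker(q):
        while True:
            item = q.get()
            if item is None:  # sentinel: shut down this worker
                break
            write_batch([item])

    threads = [threading.Thread(target=worker, args=(q,)) for q in queues]
    for t in threads:
        t.start()
    for node_id, payload in records:
        queues[hash(node_id) % num_workers].put((node_id, payload))
    for q in queues:
        q.put(None)
    for t in threads:
        t.join()
```

The trade-off is throughput ceiling per partition, and relationships whose endpoints hash to different partitions still need separate handling.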

One thing that always sticks in my mind is that graphs are a performance boost for reading data, but writing data has always been the weak spot.

I'd be interested to see any other workarounds or mitigations for this, so I'll follow along.

mdfrenchman
Graph Voyager

@gidon you could check out APOC. Now that I think of it, there might be a parallel import function in there. I'm not able to use APOC at work, so I can't say for certain, but it would be a place to check.
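I can't verify it from here either, but APOC does ship `apoc.periodic.iterate`, whose config map takes `parallel` and `retries` options. A hedged sketch of how a batched MERGE might be driven from Python (the labels and properties are illustrative, not the original poster's schema, and the session wiring is hypothetical):

```python
# Hedged sketch: drive a batched MERGE through apoc.periodic.iterate.
# parallel: false serializes the batches; retries re-runs failed ones.
# The labels/properties below are illustrative, not the poster's schema.
APOC_BULK_FOLLOWS = """
CALL apoc.periodic.iterate(
  'UNWIND $rows AS row RETURN row',
  'MERGE (a:User {id: row.src})
   MERGE (b:User {id: row.dst})
   MERGE (a)-[:FOLLOWS]->(b)',
  {batchSize: 1000, parallel: false, retries: 3, params: {rows: $rows}}
)
"""

def run_bulk_follows(session, rows):
    """Run the statement with a neo4j-driver session (hypothetical wiring)."""
    return session.run(APOC_BULK_FOLLOWS, rows=rows)
```

Serializing batches obviously costs write parallelism, but it sidesteps the lock contention entirely for that statement.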

Thank you. I'll update if I come across something useful.