We recently started using Neo4j to store Twitter data at a large scale.
Our issue is that when bulk-saving the data from a multi-threaded writer service, we started receiving so many deadlock exceptions that catching the exception and retrying is not a viable option.
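For reference, the retry approach we ruled out looks roughly like the sketch below. It assumes the underlying Neo4j Python driver surfaces deadlocks as `neo4j.exceptions.TransientError`; the function and parameter names are illustrative, not our actual code:

```python
import time
from neo4j.exceptions import TransientError  # deadlocks are classified as transient errors

def write_with_retry(bulk_write_fn, batch, max_retries=5):
    """Retry a bulk write when the transaction is rolled back by a deadlock."""
    for attempt in range(max_retries):
        try:
            return bulk_write_fn(batch)
        except TransientError:
            # Back off and try again; at our write volume this fires far too often.
            time.sleep(0.1 * (2 ** attempt))
    raise RuntimeError("bulk write failed after retries")
```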
We cannot avoid bulk-writing the same nodes or relationships, since we have multiple data scanners that sometimes process the same Twitter data.
For instance, the same Twitter user or tweet can be extracted by multiple scanners and sent to our multi-threaded writing queue. While that user, tweet or relationship is still being processed by an in-flight bulk write, another bulk write touches it and we get the deadlock exception.
Since we are dealing with large-scale data extraction, we need to bulk write and cannot save one node or relationship per write. Moreover, we need to auto-scale the writing service according to the amount of data currently being scanned, which will only increase the number of deadlock exceptions.
Before every bulk write we remove duplicates within the batch, but this does not guarantee that subsequent batches will not contain the same user, tweet or relationship.
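The in-batch deduplication is essentially the following sketch (the `twitter_id` field name is illustrative of our id field):

```python
def dedupe_batch(nodes):
    """Keep one node per twitter_id within a single batch (illustrative field name)."""
    seen = {}
    for node in nodes:
        seen.setdefault(node["twitter_id"], node)
    return list(seen.values())
```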
Even if it did, we would still encounter situations where one batch is saving a relationship like

user_1 -[:FOLLOWS]-> user_2

while another batch contains the same user,

user_2 -[:TWEETS]-> tweet_1

resulting in a deadlock exception, since both transactions need a lock on user_2.
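Our bulk writes go through neomodel's raw Cypher interface with an UNWIND + MERGE pattern, roughly like the sketch below (the query and parameter names are illustrative). Two such transactions running concurrently both need to lock user_2, which is where the deadlocks come from:

```python
from neomodel import db  # neomodel exposes raw Cypher via db.cypher_query

FOLLOWS_QUERY = """
UNWIND $rows AS row
MERGE (a:User {twitter_id: row.from_id})
MERGE (b:User {twitter_id: row.to_id})
MERGE (a)-[:FOLLOWS]->(b)
"""

def bulk_write_follows(rows):
    # Each call runs in its own transaction; concurrent batches that touch the
    # same node (e.g. user_2) contend for the same node locks.
    db.cypher_query(FOLLOWS_QUERY, {"rows": rows})
```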
We also tried sorting the nodes by our id field before bulk writing to the DB, thinking it might decrease deadlocks, but it did not help.
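The sort was roughly the following: ordering relationship rows by both endpoint ids so that concurrent transactions would, in theory, acquire node locks in a consistent order (field names are illustrative):

```python
def sort_relationship_rows(rows):
    """Order rows by (from_id, to_id) so lock acquisition order is consistent."""
    return sorted(rows, key=lambda r: (r["from_id"], r["to_id"]))
```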
We are using the neomodel OGM in Python.
Our Neo4j cluster is hosted on Azure.