I'm encountering some problems while trying to save a relatively big graph using the Spring Data Neo4j .save() method, passing the aggregate root. In the following image you can see an example (the graph in the image is not complete; the real one is a little larger than that).
Is there any other way to speed up the save?
I tried saving the nodes at depth 1 or depth 2 first using concurrency, but I don't think it will work.
Are you still encountering this issue? If you are, please create a ticket at https://github.com/neo4j and reply here with your ticket link so others can also track the progress.
If you were able to solve the issue, can you please reply back with your solution so I may mark it as resolved?
Yes, we are still experiencing the same issue. We haven't had a chance to try another approach, such as saving recursively from the lowest level up.
Ok, I'm creating an issue. I suppose you meant to link the Spring Data Neo4j repository, right?
Thanks for the link. As far as I can see there is no solution yet. I upgraded to the latest Spring version and nothing changed.
I also have a big graph, about 5 levels deep, with a lot of leaves (thousands). After turning on trace logging I can see that there is no Spring magic: each node and relationship is stored separately. Looks like I will write a custom save function with a native query that bulk-inserts the nodes and then bulk-inserts the relationships.
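A bulk insert along those lines can be sketched with Cypher's UNWIND: batch the node property maps client-side and send one statement per batch, then do a second pass for relationships. This is only a sketch under assumptions, not the thread author's actual code: the `Person` label, `KNOWS` relationship type, `id` property, and batch size are all placeholders, and the commented `Neo4jClient` call shows roughly how the query would be issued from Spring.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class BulkGraphSave {

    // Split the full row list into fixed-size batches so each
    // UNWIND statement stays a reasonable size.
    static <T> List<List<T>> partition(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            batches.add(items.subList(i, Math.min(i + batchSize, items.size())));
        }
        return batches;
    }

    // One UNWIND creates a whole batch of nodes in a single round trip,
    // instead of one query per node as with the entity-by-entity save.
    static final String CREATE_NODES =
        "UNWIND $rows AS row CREATE (n:Person) SET n = row";

    // Relationships go in a second pass, matching the already-created
    // nodes by an id property (placeholder schema).
    static final String CREATE_RELS =
        "UNWIND $rows AS row "
      + "MATCH (a:Person {id: row.from}), (b:Person {id: row.to}) "
      + "CREATE (a)-[:KNOWS]->(b)";

    public static void main(String[] args) {
        List<Map<String, Object>> nodes = List.of(
            Map.of("id", 1, "name", "a"),
            Map.of("id", 2, "name", "b"),
            Map.of("id", 3, "name", "c"));

        for (List<Map<String, Object>> batch : partition(nodes, 2)) {
            // With Spring's Neo4jClient this would be roughly:
            // neo4jClient.query(CREATE_NODES).bind(batch).to("rows").run();
            System.out.println("would insert batch of " + batch.size());
        }
    }
}
```

Inserting nodes first and relationships second keeps each statement simple and avoids re-matching nodes that don't exist yet; the trade-off is that the two passes are not atomic unless wrapped in one transaction.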