Hello, I have raised this problem on Discord; hopefully it will get some more attention here.
Period (id, year, startDate, endDate, type)
Indices are on all fields. startDate and endDate are of type Date.
Commissionable (effectiveDate, value, type)
Indices are on effectiveDate (of type Date) and type
I need to calculate and create Commission/Earning (id, type, value) nodes and connect them to Period and Client:
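To make the setup concrete, here is a rough sketch of the kind of query I mean (the Client label, the HAS_COMMISSIONABLE/EARNED_IN/EARNED_BY relationship types, and the period type are placeholders, not my actual schema):

```cypher
// Hypothetical sketch -- relationship types and labels are placeholders
MATCH (c:Client)-[:HAS_COMMISSIONABLE]->(cm:Commissionable)
MATCH (p:Period {type: 'MONTH'})
WHERE p.startDate <= cm.effectiveDate <= p.endDate
WITH c, p, sum(cm.value) AS total
CREATE (e:Earning {id: randomUUID(), type: 'COMMISSION', value: total})
CREATE (e)-[:EARNED_IN]->(p)
CREATE (e)-[:EARNED_BY]->(c)
```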
I have only 67 ambassadors so far (eventually there will be millions of them), and it already takes 9 s.
I am also trying to find a way to make it faster.
Now, since the hierarchy depth is configurable (as mentioned before) and can be, e.g., 3 or 4 instead of 2, I would like to use apoc.path.expand, since a variable-length relationship cannot be parallelised.
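Roughly, I replaced the variable-length pattern with an expand call like this (REFERRED and the Client label are placeholders for my actual hierarchy):

```cypher
// Variable-length version (placeholder relationship type):
// MATCH (a:Ambassador)-[:REFERRED*1..2]->(c:Client)

// apoc.path.expand version of the same traversal
MATCH (a:Ambassador)
CALL apoc.path.expand(a, 'REFERRED>', '+Client', 1, 2) YIELD path
WITH a, last(nodes(path)) AS c
RETURN a, c
```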
This one takes 14 min!?
Profile is here:
I profiled the neo4j process with VisualVM, and although my heap sizes are 1g min and 4g max, memory never goes above 750mb. CPU stays reasonably low. My env: macOS 11.2.3, JDK 11, Neo4j Community 4.2.4, single instance, apoc-188.8.131.52
What could be the problem? Why is apoc.path.expand so much slower than the variable-length relationship query?
And finally, since my queries contain fragments that should come from configuration, I have tried to isolate them in separate Cypher queries and call them with apoc.cypher.run.
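For example, something along these lines (the inner statement and identifiers are simplified placeholders; in reality the fragment would be assembled from configuration):

```cypher
// Hypothetical: the inner statement comes from configuration at runtime
MATCH (a:Ambassador)
CALL apoc.cypher.run(
  'MATCH (a)-[:REFERRED*1..2]->(c:Client) RETURN c',
  {a: a}) YIELD value
RETURN a, value.c AS client
```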
This takes 14 s, much longer than the first query without apoc.cypher.run.
What could be the reason?
I'll need to take a closer look later, but this isn't directly due to the APOC procs; it's down to how the planner is planning the rest of the query.
Remember that most Cypher operations execute per row, so the more rows there are, the more work is needed.
In your first query plan, rows peak at 90,128 and the mode is around 15k rows. DB hits spike at 18,552,115 on the optional expansion.
In the apoc.cypher.run query plan, rows also spike early at 90,128, the mode is about 64k rows, and the db hit spike of 18,560,302 is about the same as in the first query.
The path expander plan is the worst, with a spike of 528,401,520 rows and two consecutive db hit spikes of around 528,427,384. This is due to a label scan and a hash join midway through the query.
I'll look at this in more depth later on, but understand, these rows and spikes are not directly because of the APOC calls, but around how the query was planned around them. There may be ways to optimize to deal with some of the bad planner decisions here.
It's not a guaranteed thing, since it depends entirely on the rest of the query. It's always a good idea to recheck your plan if there are drastic changes to a query (APOC or not) and if testing reveals major timing differences.
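Concretely, prefixing the query with PROFILE (or EXPLAIN, which plans without executing) shows rows and db hits per operator, so drastic plan changes become visible immediately. The pattern below is just an illustration, not the query from this thread:

```cypher
// Run the query and show actual rows / db hits per operator
PROFILE
MATCH (a:Ambassador)-[:REFERRED*1..2]->(c:Client)
RETURN count(*)
```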
I'll try to take a closer look later and see where we can tune this one.