What's the ideal Neo4j memory configuration to avoid a java.lang.OutOfMemoryError: Java heap space error?

Hello! I am a newbie to Neo4j. I am using apoc.periodic.iterate to create a new relationship between existing nodes when a certain condition is satisfied, with {batchSize: 100000, parallel: false}. This throws java.lang.OutOfMemoryError: Java heap space.
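The call has roughly this shape (the labels, property, and relationship type below are placeholders for illustration, not my real schema):

CALL apoc.periodic.iterate(
  // driving statement: find the node pairs that satisfy the condition
  "MATCH (c:Customer), (s:Store) WHERE c.storeId = s.storeId RETURN c, s",
  // per-row statement: create the new relationship
  "MERGE (c)-[:SHOPS_AT]->(s)",
  {batchSize: 100000, parallel: false});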

My graph contains 80M nodes and 105M properties and is 8.1 GB on disk. I am running Neo4j in Docker on a Linux server with 512 GB of RAM, and I have used the following memory configuration:

dbms.memory.heap.initial_size=31g
dbms.memory.heap.max_size=31g
dbms.memory.pagecache.size=63g

dbms.memory.transaction.global_max_size=96g
dbms.tx_state.memory_allocation=ON_HEAP

Please suggest the best memory configuration if I want to keep dbms.memory.transaction.global_max_size limited to 96 GB (which can be increased if needed). neo4j-admin memrec suggests not allocating more than 31 GB to the heap.

Thank you in advance.

This is likely not a memory issue, but a query that needs tuning (it probably isn't doing what you think it's doing).

31 GB is usually the maximum recommended heap size (past that the JVM loses compressed object pointers), and most queries, especially batched queries run through that procedure, shouldn't need that much on their own.
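For illustration only (this is a guess at a common anti-pattern, not your actual query): an eager operator such as collect() in the driving statement forces every matched row onto the heap before the first batch is even handed out, whereas returning rows directly lets the procedure stream them batch by batch.

// Hypothetical example: eager driving statement, every pair is
// collected on the heap before batching starts
CALL apoc.periodic.iterate(
  "MATCH (c:Customer), (s:Store) WHERE c.storeId = s.storeId
   WITH collect([c, s]) AS pairs
   UNWIND pairs AS pair
   RETURN pair[0] AS c, pair[1] AS s",
  "MERGE (c)-[:SHOPS_AT]->(s)",
  {batchSize: 100000, parallel: false});

// Streaming alternative: rows are returned as they are matched,
// so only about one batch at a time needs to sit on the heap
CALL apoc.periodic.iterate(
  "MATCH (c:Customer), (s:Store) WHERE c.storeId = s.storeId RETURN c, s",
  "MERGE (c)-[:SHOPS_AT]->(s)",
  {batchSize: 100000, parallel: false});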

Please provide the full query and we can work with you to help tune it. If it's a sensitive query, reach out to us on the Neo4j Users Slack and we can work with you via direct messages.

@nichopriyatham97

If that is so, i.e. the entire database consumes 8.1 GB on the filesystem, then there really is no need to set

dbms.memory.pagecache.size

to anything more than 8.1G, since this parameter controls the amount of RAM used to cache the graph's store files in memory. So if the database is 8.1G, setting dbms.memory.pagecache.size to 20G, 40G, or 60G provides no more benefit than simply setting it to 8.1G (set it to, say, 10G to allow for growth of the database).
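So, as a rough starting point under that assumption (an 8.1G store, plus the 31 GB heap cap that memrec reported), settings along these lines should be more than enough, growing the page cache only as the store itself grows:

dbms.memory.heap.initial_size=31g
dbms.memory.heap.max_size=31g
dbms.memory.pagecache.size=10g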