I am using Neo4j Community Edition on a single machine with 95 GB of RAM. I am running a job to create top clusters, and the graph has ~320 million nodes.
Here is what my heap configuration looks like:
server.memory.heap.initial_size=75G
server.memory.heap.max_size=75G
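Those are the only memory settings I have changed; everything else is at its default. For reference, a fuller memory section would also pin the page cache explicitly, something like the sketch below (the 20G figure is purely illustrative, not something from my actual config):

server.memory.heap.initial_size=75G
server.memory.heap.max_size=75G
# The page cache memory-maps the store files; I have left it at the default heuristic.
# An explicit setting would look like:
# server.memory.pagecache.size=20G
# Whatever RAM remains out of the 95 GB is left for the OS and off-heap use.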
Even with the above configuration I still get heap-memory-related errors like the one below. What would be an ideal memory configuration so that the top-cluster calculation can run?
Error:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3) (brdn7979.target.com executor 1): java.io.IOException: org.neo4j.driver.exceptions.ClientException: Failed to invoke procedure `gds.graph.project`: Caused by: java.lang.IllegalStateException: Procedure was blocked since minimum estimated memory (23 GiB) exceeds current free memory (23 GiB).
Even though I have set the heap to 75 GB, the error suggests only 23 GiB of memory is actually available, as if the effective maximum heap were 23 GB.
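In case it is useful, this is how I check the estimate directly before projecting; a minimal Cypher sketch using gds.graph.project.estimate, where the node label Entity and relationship type RELATED are placeholders rather than my actual schema:

CALL gds.graph.project.estimate(
  // placeholder node projection
  'Entity',
  // placeholder relationship projection
  'RELATED'
)
YIELD requiredMemory, bytesMin, bytesMax
RETURN requiredMemory, bytesMin, bytesMax;

Running this against the real graph should show where the 23 GiB minimum estimate in the error above comes from.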