Hi All,
I am running the Neo4j Enterprise edition on AWS as a single-node VM, a t2.xlarge instance (16 GB RAM, 4 vCPUs).
However, every day or two the heap memory gets exhausted. I even ran neo4j-admin memrec --memory=graph.db and assigned the heap memory and page cache in accordance with the result of that command.
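For completeness, as far as I can tell --memory expects a capacity (how much of the machine to budget for Neo4j), not a store name, so on this 16 GB box I would have expected the invocation to look like this (a hedged sketch of the syntax, not a verified command line):

neo4j-admin memrec --memory=16g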
Neo4j then crashes with 'java.lang.OutOfMemoryError: Java heap space'.
After investigating further, I enabled query logging on the server and customized neo4j.template a bit:
dbms.memory.heap.initial_size=5g
dbms.memory.heap.max_size=5g
dbms.memory.pagecache.size=7g
dbms.logs.query.enabled=true
dbms.logs.query.parameter_logging_enabled=true
dbms.logs.query.time_logging_enabled=true
dbms.logs.query.allocation_logging_enabled=false
dbms.logs.query.page_logging_enabled=false
dbms.track_query_allocation=true
cypher.query_max_allocations.size=1G
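As a sanity check on the budget (my own arithmetic, assuming the 16 GB t2.xlarge):

# heap        5 GB  (dbms.memory.heap.max_size=5g)
# page cache  7 GB  (dbms.memory.pagecache.size=7g)
# total      12 GB  reserved, leaving ~4 GB for the OS and
#                   Neo4j's off-heap/native usage

So the static sizing itself should not starve the machine; I suspect the OOM comes from individual queries allocating too much on-heap.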
I have set cypher.query_max_allocations.size=1G, but will it actually restrict the maximum heap allocation for each query?
If not, is there any other way to limit the maximum memory allocated to each query?
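The closest documented settings I could find are the per-transaction memory limits introduced in Neo4j 4.1 (a hedged sketch, and not applicable if the server is still on 3.x, which my use of neo4j.template suggests):

# Neo4j 4.1+ only: abort any transaction whose on-heap allocation exceeds 1 GB
dbms.memory.transaction.max_size=1g
# Neo4j 4.1+ only: cap the combined on-heap usage of all running transactions
dbms.memory.transaction.global_max_size=4g

Is that the intended mechanism, or does cypher.query_max_allocations.size achieve the same thing?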
Currently we have 267717 nodes, 13428 relationships, and 395 properties, and that may grow 10x in the near future.
I have been stuck with this error for a while; every time it happens I have to take an AWS backup and restore it into a new VM.
Any help will be appreciated.