Hi @berkay.coskuner98,
yeah, htop will not be able to see inside the JVM. If it reports that the java process is using 250G of memory, it may well be that most of that memory is actually free/available from inside the JVM.
You can try running this command from wherever you usually run htop:
jcmd $(jps | grep -E 'EntryPoint$' | cut -d' ' -f1) GC.heap_info
This should print something like:
garbage-first heap total 360448K, used 220058K [0x0000000400000000, 0x0000000800000000)
region size 8192K, 21 young (172032K), 4 survivors (32768K)
Metaspace used 108491K, committed 109696K, reserved 1179648K
class space used 13505K, committed 14016K, reserved 1048576K
In this case, the total heap is 352M, of which 215M is being used.
Compare that with the htop output, which shows 812M: that figure is the JVM heap plus all the memory the JVM and the GC need for themselves (metaspace, thread stacks, GC bookkeeping, and so on).
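If it helps, here is a quick sketch for putting the two numbers side by side, the OS view versus the JVM's own view. It reuses the same jps-based PID lookup as above, so it assumes your Neo4j process shows up as EntryPoint in jps:

PID=$(jps | grep -E 'EntryPoint$' | cut -d' ' -f1)
# resident set size as the OS sees it (this is roughly what htop shows)
ps -o rss= -p "$PID" | awk '{printf "RSS: %.0fM\n", $1/1024}'
# heap usage as the JVM itself sees it
jcmd "$PID" GC.heap_info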
> Doing the same sequence (create+fastrp+drop) twice, neo4j goes down because of insufficient memory.
Well, ideally that should work. Did you configure Neo4j in a particular way, especially around heap size and page cache size?
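For reference, these are the knobs I mean. The values below are just placeholders you would size to your machine and workload, and the property names are the Neo4j 4.x ones (in 5.x the same settings live under server.memory.* instead):

# neo4j.conf (Neo4j 4.x names; values are illustrative only)
dbms.memory.heap.initial_size=8g
dbms.memory.heap.max_size=8g
# page cache for the store files, allocated outside the JVM heap
dbms.memory.pagecache.size=16g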
> Our goal was returning to the initial memory usage after the first sequence.
It won't return to the initial state; there are caches of various sorts involved that will retain some memory after the operation. And since, on the JVM, memory is reclaimed on demand rather than the moment something is no longer needed, usage might not go down at all immediately after dropping the graph. It should, however, eventually clean up the old graph when you do a second project+algo run.
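If you want to convince yourself that the dropped graph really is reclaimable, you can ask the JVM for a full GC and look at the heap again. This is purely a diagnostic sketch, not something to run regularly in production, since a forced full GC pauses the JVM; it assumes the same PID lookup as before:

PID=$(jps | grep -E 'EntryPoint$' | cut -d' ' -f1)
# heap usage right after dropping the graph (may still look high)
jcmd "$PID" GC.heap_info
# request a full GC, then check again; used heap should drop noticeably
jcmd "$PID" GC.run
jcmd "$PID" GC.heap_info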