Currently we have about 313k files, and caching all the data is taking around 670GB of memory. Is there a way to reduce the system configuration while keeping the same set of data? I am new to this, so please shed some light on it.
Can we use Redis here? Or do I need to split the data across multiple instances?
Current system config: 94 cores and 750GB RAM
@shreeraj0405
313k files?
Can you provide more detail?
What version of Neo4j is in play here?
>>taking a lot of memory of around 670GB to cache all the data
Is your database 670GB on disk? How have you configured `dbms.memory.pagecache.size`?
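For reference, a rough way to check both of these on the server. The paths below are assumptions for a typical Linux package install; adjust them for your setup:

```
# Assumed default Linux paths; adjust for your installation.
du -sh /var/lib/neo4j/data/databases/neo4j   # on-disk size of the default database
grep "dbms.memory" /etc/neo4j/neo4j.conf     # current heap and page cache settings
neo4j-admin memrec                           # Neo4j's own memory-sizing recommendation
```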
Hey @dana_canzano , thanks for the reply.
We have around 623GB of data, the Neo4j version is 4.2.6, and `dbms.memory.pagecache.size` is set to 690G. Could you suggest how this can be handled better? Redis or some other way?
PS: The dataset being loaded is in CSV format.
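One thing worth noting: a 690G page cache plus the JVM heap plus the OS almost certainly overcommits a 750GB machine, since Neo4j needs headroom beyond the page cache for heap, transaction state, and the operating system. Redis would not help here, because Neo4j's page cache caches its own store files directly rather than going through an external cache. A more balanced split might look like the sketch below. These values are illustrative assumptions for a 750GB host with a ~623GB store, not a tuned recommendation; `neo4j-admin memrec` can produce numbers for your exact setup:

```
# neo4j.conf sketch for a 750GB machine with a ~623GB store
# (illustrative values, not a tuned recommendation)
dbms.memory.heap.initial_size=31g
dbms.memory.heap.max_size=31g      # keeping the heap under 32g preserves JVM compressed oops
dbms.memory.pagecache.size=640g    # store size plus some growth headroom
# remaining ~75g is left for the OS, transaction state, and native memory
```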