Default query_cache_size is 1000. I plan to make it 10,000.
What are the memory implications?
Is memory_pagecache_size the only variable which needs to be played with?
Why 10k? That is not something we typically see in the field. Do you expect 10k unique queries, even after eliminating for parameters? i.e.
match (n:Person) where n.age=21 return n;
and
match (n:Person) where n.age=22 return n;
and
match (n:Person) where n.age=$param1 return n; {param1: 42}
would effectively be only 1 entry in the query plan cache (the parameterized form), since the first two differ only in a literal value.
What problem are you trying to solve by increasing the query plan cache?
And as to "Only memory_pagecache_size is the variable which needs to be played with?":
are you speaking of dbms.memory.pagecache.size, which is completely different? That parameter represents the amount of RAM to allocate so as to attempt to fit the graph into RAM for querying. In a perfect world, if your graph were for example 132G large, then setting dbms.memory.pagecache.size=135G would ensure all queries read data from RAM as opposed to the file system. See Memory configuration - Operations Manual
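To make that concrete, a minimal neo4j.conf sketch (the 132G/135G figures are just the example numbers from above; size these to your actual store and available RAM):

```
# Page cache: sized slightly above the on-disk store size (example figures only)
# so reads come from RAM rather than the file system.
dbms.memory.pagecache.size=135G
```

Note this is separate from the heap (dbms.memory.heap.initial_size / dbms.memory.heap.max_size), which is where the query plan cache itself lives.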
@dana_canzano We're doing network modeling and could have quite a few query combinations. So we are trying to assess: if query_cache_size is made 10,000, what attributes should we look at to check memory usage?
ok... but before increasing dbms.query_cache_size to 10k, I would want to see whether there is actually excessive planning going on. Has this been checked? dbms.logs.query.time_logging_enabled would at least let you know from the query.log if there was excessive planning.
As to measuring the implications of going to 10k, honestly I am not sure, since we generally do not see it changed in real-world implementations.
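As a sketch, the relevant neo4j.conf settings for that check would be something like the following (setting names as used in Neo4j 3.x, where dbms.query_cache_size also lives; verify against the Operations Manual for your version):

```
# Enable the query log and include planning time in each entry
dbms.logs.query.enabled=true
dbms.logs.query.time_logging_enabled=true
```

If the log then shows high planning times recurring for many distinct query strings, that would point to plan cache misses and justify a larger cache; if not, raising the limit likely buys nothing.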
Right now, no particular reason. We just want to ensure most query hits are served from the cache. We will keep the default, since the memory usage at a higher number is currently unknown.