Neo4j has several separate memory settings you can configure. The way you're approaching this looks like it should work, but the setting you're adjusting is the page cache. The page cache controls how much of the graph is kept hot in memory; it doesn't help with large transactions run against the database.
For an overview of how Neo4j divides memory, see the memory configuration section of the Neo4j operations manual; it explains the different memory regions and what each one does. I suspect you need a larger heap to handle the client transaction: transaction state lives on the heap, so the heap limits how big a transaction can get, while the page cache speeds queries up but doesn't affect how large they can be.
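As a rough sketch of what that looks like in neo4j.conf (the values below are illustrative only, and the exact key names depend on your Neo4j version; in Neo4j 5 the prefix is server.memory.* rather than dbms.memory.*):

    # neo4j.conf -- illustrative sizes, tune for your own machine
    # Heap: holds transaction state and query execution, so it bounds how
    # large a single transaction can grow.
    dbms.memory.heap.initial_size=4g
    dbms.memory.heap.max_size=4g

    # Page cache: keeps the store files (nodes, relationships, properties)
    # hot in memory. This speeds reads up but does not limit transaction size.
    dbms.memory.pagecache.size=6g

If you're running embedded, the heap is just the JVM heap of your own application, so you'd raise it with the usual -Xms/-Xmx JVM flags instead.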
Just wondering: this is happening locally with test data taken from the real data we need to process, and in production we could have larger data sets or simply more traffic. I take it that's an indication we should set this value on the Neo4j server as well, right?
Correct. The same settings apply whether it's server or embedded. Memory sizing is an important step with any database product, and "out of memory" errors are a dead giveaway that it hasn't been properly tuned for the workload.
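If it helps as a starting point, Neo4j ships a memory recommendation tool that prints suggested heap and page cache sizes for a given amount of RAM. A sketch of how you might run it (Neo4j 4.x syntax; in Neo4j 5 the command is neo4j-admin server memory-recommendation):

    # Ask Neo4j to suggest heap and page cache sizes for a 16 GB machine
    neo4j-admin memrec --memory=16g

Treat the output as a baseline and adjust from there based on the size of your transactions and the rest of the workload.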