Just reaching out to the community here for best practices. Using Neo4j with large volumes of data seems to go hand in hand. When we fire requests at Neo4j through the neo4j-rx-spring-boot starter, our application seems to reach a point where it can no longer formulate the response. I haven't been able to observe an actual error yet - the connection is simply cut off, presumably due to running out of memory, since I don't see the issue with smaller requests.
The following setting seems to help:
Perhaps also on the pod level (the API runs in OpenShift as a Docker container): which numbers should I be increasing if I'm getting unexplained errors on large volumes?
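For context, this is roughly the kind of pod-level configuration I mean; a minimal sketch only, and all names and values below (`my-api`, the memory sizes, the JVM flag) are placeholders rather than my actual config:

```yaml
# Sketch of the pod-level knobs in question (placeholder names/values).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api            # placeholder name
spec:
  template:
    spec:
      containers:
        - name: my-api    # placeholder name
          env:
            # Let the JVM size its heap from the container limit instead
            # of the host's memory (supported on recent JDK 8+ builds).
            - name: JAVA_TOOL_OPTIONS
              value: "-XX:MaxRAMPercentage=75.0"
          resources:
            requests:
              memory: "2Gi"
            limits:
              # The container is OOM-killed if it exceeds this limit.
              memory: "3Gi"
```

If the pod is indeed running out of memory, I'd expect `oc describe pod` to show an `OOMKilled` termination reason (exit code 137), which would at least confirm the suspicion before tuning numbers blindly.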