Reliability with Neo4j: Is there a way to configure Neo4j to not crash?

I am evaluating Neo4j for production usage. While playing around with it, the server crashed (OutOfMemory errors) many times due to non-optimized queries.

I understand that perhaps there's not enough memory. But in other databases I've used, the server doesn't just crash; it slows down or terminates the specific query/transaction. Neo4j, on the other hand, simply shuts down.

Is there a way to configure neo4j to not crash?

Hi, yes.
You can configure several options, which we have also enabled in the sandbox, for example:

  1. Transaction and global memory limits. The global memory limit should be around 70% of the configured heap; if you know the concurrency of your statements and don't have outlier queries, you can also configure a per-statement memory limit.
  2. Transaction timeouts. Limit queries to at most X seconds of runtime and abort them after that.
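For reference, here is a minimal neo4j.conf sketch of the two options above. The setting names follow recent Neo4j versions (the exact names and the global/per-transaction split vary between 4.x and 5.x, so check the configuration reference for your version), and the values are illustrative only:

```
# 1. Global limit: memory that all running transactions may consume combined
#    (recommended around 70% of the configured heap).
dbms.memory.transaction.total.max=2g

#    Per-transaction limit, useful if your workload has no outlier queries.
db.memory.transaction.max=512m

# 2. Transaction timeout: abort any transaction that runs longer than this.
db.transaction.timeout=30s
```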

Those settings should be the default for new installations, but unfortunately they aren't yet.

Hi,
I understand there are ways to configure the memory better, etc.
However, my question is what happens when memory runs out.
For example, in Java programming it's common practice to wrap work in threads so that if one thread runs out of memory, the main service keeps running. Currently it seems Neo4j crashes when out of memory, which is not acceptable for a variety of workloads (transactional, recommendations, financial).
I would rather have a query terminated or slowed down than have the whole system simply shut down.

Yes, exactly what you said in your last sentence is what you get with the configured memory limits:
they terminate queries that exceed the allowed memory allocation.

In Java, when the JVM runs out of memory, it goes into an unreliable state.

That's why you need to make sure it doesn't happen.
In Neo4j we do memory tracking in most places, and this is where the memory limits come in.

The only thing you can isolate with threads are exceptions (not errors): if a thread dies because of an exception, the rest of the engine continues to run.

But a failure to allocate memory is a critical one and will cause a more substantial issue for the whole JVM.
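To illustrate the distinction, here is a minimal Java sketch (the class and message names are mine, for illustration): an exception in a worker thread kills only that thread, while an OutOfMemoryError is a java.lang.Error that undermines the whole JVM and cannot be safely isolated this way.

```java
// Minimal sketch: exceptions can be isolated per thread, Errors cannot.
public class ThreadIsolation {

    static String runWorker() {
        StringBuilder log = new StringBuilder();

        // Worker that fails with an ordinary (unchecked) Exception.
        Thread worker = new Thread(() -> {
            throw new RuntimeException("query exceeded limit");
        });

        // The handler runs on the dying thread; only that thread terminates.
        worker.setUncaughtExceptionHandler((t, e) ->
                log.append("worker died: ").append(e.getMessage()));

        worker.start();
        try {
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }

        // The main service is unaffected by the worker's exception.
        log.append("; main still running");

        // An OutOfMemoryError, by contrast, is a java.lang.Error: even if one
        // thread "catches" it, the JVM may already be unable to allocate at
        // all, so thread isolation does not make the process reliable again.
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(runWorker());
    }
}
```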

I see. Okay, let me try that then. Excellent that this setting will be set up by default.
Side note: there seems to be a typo on the site. It shows
db.memory.transaction.total when I think the correct value is dbms.memory.transaction.total (dbms)?

Also, I think there's something wrong with this value. The settings comments state that

# Limit the amount of memory that all of the running transaction can consume.
# The default value is 70% of the heap size limit.

I am wondering how my server crashed from a transaction with this default setting in place?

It's not set by default; you have to uncomment the line.