Neo4j Docker container disappears when creating a random graph

Hi: I'm trying out Neo4j 4.1 Community Edition and I've run into a hurdle that I can't seem to fix.
I'm creating a random graph with the Python program below. Basically the program
creates random edges between a set of max_nodes nodes. Each node gets 10 different
edge types (predicates), and each predicate connects it to one or more other nodes.
The program works perfectly well up to 170,000 nodes, but then it crashes Neo4j.

I'm running this in Docker, where I give the Neo4j container about 20 GiB (on a 64 GiB machine).
Below I show first the trace at the point where Python dies, and below that the output of docker stats.
Note that the reported memory use of about 5.8 GiB is still well below the 20 GiB limit. Also note
that the Docker container completely disappears afterwards and cannot be restarted.

Question 1: Is there any other parameter that I need to set in my Docker configuration (included at the
bottom) or in neo4j.conf? The comments in the conf file seem to suggest that if you don't set
a maximum heap size, the heap is unlimited.

Question 2: Am I wrong in assuming that the Python driver commits at every write_transaction? I can see the
database grow in Neo4j Browser, which would seem to indicate that transactions are committed.

[Aside: I also have a version of this that uses LOAD CSV, and it dies around 2 M edges as well. And yes,
the Docker container disappears there too.]

cheers, J

;;;;;;;;; Random Graph Generator


from neo4j import GraphDatabase
import random
from datetime import datetime

uri = "neo4j://localhost:7687"

driver = GraphDatabase.driver(uri, auth=("neo4j", "xyzzy"))

max_predicates = 10 
max_nodes = 1000000
max_values_per_predicate = 3 

def send_string(tx, qstr):
    # helper: run an arbitrary Cypher string inside a managed transaction
    tx.run(qstr)

# create an index on :Node(id) so the merges below can look nodes up quickly
with driver.session() as session:
    session.write_transaction(send_string,
                              "create index NodeIDIndex for (n:Node) on (n.id)")

def create_unique_random_int(seen, upper):
    # rejection sampling: recurse until we draw an id we haven't used yet
    new = random.randint(0, upper)
    if seen.get(new):
        return create_unique_random_int(seen, upper)
    else:
        seen[new] = True
        return new
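
# Aside: the recursion above is simple rejection sampling. A hypothetical
# alternative (a sketch, not used below) draws all k distinct ids in one call;
# oversampling by len(exclude) guarantees that k values survive the filter:
def sample_distinct(exclude, upper, k):
    drawn = random.sample(range(upper), k + len(exclude))
    return [n for n in drawn if n not in exclude][:k]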

            
def fill_edges():
    # start from an empty database
    with driver.session() as session:
        session.write_transaction(send_string, "match (n) detach delete n")
    for s in range(0, max_nodes):
        if (s % 100) == 0:
            print(datetime.now(), s)
        # draw the (predicate, object) pairs for this subject; objects are
        # all distinct and never the subject itself
        mem = {s: True}
        nodes = []
        for p in range(0, max_predicates):
            for i in range(0, max_values_per_predicate):
                nodes.append([p, create_unique_random_int(mem, max_nodes)])
        # build one query string: merge the subject, merge every object,
        # then create all the edges
        query = "merge (subject:Node {id: 'id%d'}) " % s
        for pair in nodes:
            o = pair[1]
            query = query + "merge (node%d:Node {id: 'id%d'}) " % (o, o)
        for pair in nodes:
            p = pair[0]
            o = pair[1]
            query = query + "create (subject)-[:KNOWS {name: 'pred%d'}]->(node%d) " % (p, o)
        # one write_transaction (and hence one commit) per subject
        with driver.session() as session:
            session.write_transaction(send_string, query)


fill_edges()
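
For what it's worth, the same writes could also be expressed as a single parameterized
query fed with batches of rows, instead of a freshly built query string per subject.
A sketch against the same driver (create_edges_batch and the row format are illustrative,
not what the program above actually runs):

def create_edges_batch(tx, rows):
    # rows: list of dicts like {'s': 'id0', 'p': 'pred3', 'o': 'id42'}
    tx.run("unwind $rows as row "
           "merge (s:Node {id: row.s}) "
           "merge (o:Node {id: row.o}) "
           "create (s)-[:KNOWS {name: row.p}]->(o)", rows=rows)

with driver.session() as session:
    session.write_transaction(create_edges_batch,
                              [{'s': 'id0', 'p': 'pred0', 'o': 'id42'}])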


;;;;;;;;;;; output of Python

2020-08-26 23:03:13.305878 178800
2020-08-26 23:03:22.045828 178900
2020-08-26 23:03:30.854331 179000
2020-08-26 23:03:39.780870 179100
2020-08-26 23:03:49.464563 179200
2020-08-26 23:03:58.344138 179300
2020-08-26 23:04:07.224920 179400
2020-08-26 23:04:16.339982 179500
Failed to read from defunct connection IPv4Address(('localhost', 7687)) (IPv4Address(('127.0.0.1', 7687)))
Transaction failed and will be retried in 1.148549033051224s (Connection closed during commit)
Unable to retrieve routing information
Transaction failed and will be retried in 2.1229540615383877s (Unable to retrieve routing information)
Unable to retrieve routing information
Transaction failed and will be retried in 3.500658827086268s (Unable to retrieve routing information)
Unable to retrieve routing information
Transaction failed and will be retried in 9.199626645310065s (Unable to retrieve routing information)
Unable to retrieve routing information
Transaction failed and will be retried in 14.475375532071432s (Unable to retrieve routing information)
Unable to retrieve routing information
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 23, in fill_edges
  File "/home/test/anaconda3/envs/jans/lib/python3.8/site-packages/neo4j/work/simple.py", line 403, in write_transaction
    return self._run_transaction(WRITE_ACCESS, transaction_function, *args, **kwargs)
  File "/home/test/anaconda3/envs/jans/lib/python3.8/site-packages/neo4j/work/simple.py", line 334, in _run_transaction
    raise errors[-1]
  File "/home/test/anaconda3/envs/jans/lib/python3.8/site-packages/neo4j/work/simple.py", line 306, in _run_transaction
    self._open_transaction(access_mode=access_mode, database=self._config.database, metadata=metadata, timeout=timeout)
  File "/home/test/anaconda3/envs/jans/lib/python3.8/site-packages/neo4j/work/simple.py", line 247, in _open_transaction
    self._connect(access_mode=access_mode, database=database)
  File "/home/test/anaconda3/envs/jans/lib/python3.8/site-packages/neo4j/work/simple.py", line 116, in _connect
    self._connection = self._pool.acquire(access_mode=access_mode, timeout=self._config.connection_acquisition_timeout, database=database)
  File "/home/test/anaconda3/envs/jans/lib/python3.8/site-packages/neo4j/io/__init__.py", line 872, in acquire
    address = self._select_address(access_mode=access_mode, database=database)
  File "/home/test/anaconda3/envs/jans/lib/python3.8/site-packages/neo4j/io/__init__.py", line 845, in _select_address
    self.ensure_routing_table_is_fresh(access_mode=access_mode, database=database)
  File "/home/test/anaconda3/envs/jans/lib/python3.8/site-packages/neo4j/io/__init__.py", line 828, in ensure_routing_table_is_fresh
    self.update_routing_table(database=database)
  File "/home/test/anaconda3/envs/jans/lib/python3.8/site-packages/neo4j/io/__init__.py", line 801, in update_routing_table
    raise ServiceUnavailable("Unable to retrieve routing information")
neo4j.exceptions.ServiceUnavailable: Unable to retrieve routing information

;;;;;;;;; output Docker stats

CONTAINER ID   NAME        CPU %     MEM USAGE / LIMIT   MEM %    NET I/O          BLOCK I/O   PIDS
0937f297e695   testneo4j   201.15%   5.853GiB / 20GiB    29.26%   558MB / 69.8MB   0B / 0B     61
0937f297e695   testneo4j   185.17%   5.853GiB / 20GiB    29.26%   558MB / 69.9MB   0B / 0B     61
0937f297e695   testneo4j   116.77%   5.853GiB / 20GiB    29.26%   558MB / 69.9MB   0B / 0B     61
CONTAINER ID   NAME        CPU %     MEM USAGE / LIMIT   MEM %    NET I/O          BLOCK I/O   PIDS
CONTAINER ID   NAME        CPU %     MEM USAGE / LIMIT   MEM %    NET I/O          BLOCK I/O   PIDS

(after the crash the stats header keeps refreshing with no container row: the container is gone)

docker run \
       --name testneo4j \
       --memory 20g \
       --cpus 4 \
       -p7474:7474 -p7687:7687 \
       -d \
       -v $HOME/neo4j/data:/data \
       -v $HOME/neo4j/logs:/logs \
       -v $HOME/neo4j/import:/var/lib/neo4j/import \
       -v $HOME/neo4j/plugins:/plugins \
       --env NEO4J_dbms_security_allow__csv__import__from__file__urls=true \
       --env NEO4J_AUTH=neo4j/xyzzy \
       --env NEO4J_dbms_connector_https_advertised__address="localhost:7473" \
       --env NEO4J_dbms_connector_http_advertised__address="localhost:7474" \
       --env NEO4J_dbms_connector_bolt_advertised__address="localhost:7687" \
       neo4j:latest

This sounds like you're getting hit by a known issue: "Crashes with `malloc(): invalid size (unsorted)`" (neo4j/neo4j#12564, https://github.com/neo4j/neo4j/issues/12564).
TL;DR: make sure your page cache size is configured explicitly in 4.1.1; this will be fixed in 4.1.2.
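
For example, adding explicit memory settings to the docker run command above pins both the
page cache and the heap (the sizes are illustrative, not recommendations; tune them to your
data and machine):

       --env NEO4J_dbms_memory_pagecache_size=4G \
       --env NEO4J_dbms_memory_heap_initial__size=8G \
       --env NEO4J_dbms_memory_heap_max__size=8G \

These correspond to dbms.memory.pagecache.size, dbms.memory.heap.initial_size and
dbms.memory.heap.max_size in neo4j.conf.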

Regarding question 2: yes, a commit is done for each write transaction.
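
If it helps, each write_transaction call is roughly equivalent to the explicit form below
(a sketch against the same 4.x Python driver; the query is a stand-in), except that
write_transaction also retries the function on transient failures, which is why you see the
"Transaction failed and will be retried" lines in your output:

with driver.session() as session:
    tx = session.begin_transaction()
    tx.run("merge (n:Node {id: 'id0'})")  # any write
    tx.commit()  # write_transaction commits like this when the function returns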