'No space left on device' when pre-allocating log file even though there is plenty of disk space

I have a neo4j database that, after an import process, sometimes ends up in read-only mode. The log message says this is due to lack of disk space, but there is plenty of space.

debug.log shows errors like this:

2025-11-16 16:49:13.336+0000 INFO  [o.n.k.d.Database] [neo4j/b6e51a80] Rotated to transaction log [/var/lib/neo4j/data/transactions/neo4j/neostore.transaction.db.15178] version=15177, last transaction in previous log=26512770, rotation took 313 millis, started after 1564 millis.
2025-11-16 16:49:16.572+0000 INFO  [o.n.k.d.Database] [neo4j/b6e51a80] Rotated to transaction log [/var/lib/neo4j/data/transactions/neo4j/neostore.transaction.db.15179] version=15178, last transaction in previous log=26512891, rotation took 511 millis, started after 2725 millis.
2025-11-16 16:49:19.807+0000 INFO  [o.n.k.d.Database] [neo4j/b6e51a80] Rotated to transaction log [/var/lib/neo4j/data/transactions/neo4j/neostore.transaction.db.15180] version=15179, last transaction in previous log=26512978, rotation took 1172 millis, started after 2063 millis.
2025-11-16 16:49:22.641+0000 INFO  [o.n.k.d.Database] [neo4j/b6e51a80] Rotated to transaction log [/var/lib/neo4j/data/transactions/neo4j/neostore.transaction.db.15181] version=15180, last transaction in previous log=26513047, rotation took 170 millis, started after 2664 millis.
2025-11-16 16:49:23.983+0000 INFO  [o.n.k.d.Database] [neo4j/b6e51a80] Rotated to transaction log [/var/lib/neo4j/data/transactions/neo4j/neostore.transaction.db.15182] version=15181, last transaction in previous log=26513107, rotation took 115 millis, started after 1227 millis.
2025-11-16 16:49:25.291+0000 INFO  [o.n.k.d.Database] [neo4j/b6e51a80] Rotated to transaction log [/var/lib/neo4j/data/transactions/neo4j/neostore.transaction.db.15183] version=15182, last transaction in previous log=26513175, rotation took 104 millis, started after 1204 millis.
2025-11-16 16:49:26.879+0000 INFO  [o.n.k.d.Database] [neo4j/b6e51a80] Rotated to transaction log [/var/lib/neo4j/data/transactions/neo4j/neostore.transaction.db.15184] version=15183, last transaction in previous log=26513269, rotation took 103 millis, started after 1485 millis.
2025-11-16 16:49:30.664+0000 ERROR [o.n.k.i.t.l.f.LogFileChannelNativeAccessor] [neo4j/b6e51a80] Warning! System is running out of disk space. Failed to preallocate log file since disk does not have enough space left. Please provision more space to avoid that. Allocation failure details: ErrorCode=28, errorMessage='No space left on device'
2025-11-16 16:49:30.665+0000 ERROR [o.n.k.i.t.l.f.LogFileChannelNativeAccessor] [neo4j/b6e51a80] Switching database to read only mode.
2025-11-16 16:49:30.674+0000 INFO  [o.n.c.Config] server.databases.read_only changed to neo4j, by Dynamic failover to read-only mode.
2025-11-16 16:49:30.796+0000 INFO  [o.n.k.d.Database] [neo4j/b6e51a80] Rotated to transaction log [/var/lib/neo4j/data/transactions/neo4j/neostore.transaction.db.15185] version=15184, last transaction in previous log=26513347, rotation took 2413 millis, started after 1504 millis.
2025-11-16 16:49:31.340+0000 WARN  [o.n.k.a.p.GlobalProcedures] Error during iterate.commit:
2025-11-16 16:49:31.341+0000 WARN  [o.n.k.a.p.GlobalProcedures] 168 times: org.neo4j.graphdb.WriteOperationsNotAllowedException: No write operations are allowed on this database. The database is in read-only mode on this Neo4j instance.
2025-11-16 16:49:31.342+0000 WARN  [o.n.k.a.p.GlobalProcedures] Error during iterate.execute:
2025-11-16 16:49:31.342+0000 WARN  [o.n.k.a.p.GlobalProcedures] 168 times: No write operations are allowed on this database. The database is in read-only mode on this Neo4j instance.
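
To double-check what a process on this host actually sees for the transaction-log directory (both free bytes and free inodes, since "No space left on device" can also mean the filesystem has run out of inodes), here is a quick sketch, using the path from the log above:

import os
import shutil

# Sketch: check free space and free inodes on the filesystem that holds the
# transaction logs. Path taken from the debug.log output above.
TX_LOG_DIR = "/var/lib/neo4j/data/transactions/neo4j"

usage = shutil.disk_usage(TX_LOG_DIR)   # total/used/free in bytes
stat = os.statvfs(TX_LOG_DIR)           # filesystem stats, including inode counts

print(f"free bytes : {usage.free:,}")
print(f"free inodes: {stat.f_ffree:,}")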

There is plenty of space:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.6G  1.3M  1.6G   1% /run
/dev/vda1       194G  138G   57G  72% /
tmpfs           7.9G  1.1M  7.9G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/vda15      105M  6.1M   99M   6% /boot/efi
tmpfs           1.6G   16K  1.6G   1% /run/user/1000

And I'm not keeping transaction logs:

$ grep tx_log neo4j.conf
db.tx_log.rotation.retention_policy=keep_none
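
To confirm what the running server actually has in effect (rather than just what is in neo4j.conf), the tx_log settings can also be read back over Cypher; a small sketch via the same neomodel connection:

from neomodel import db

# Sketch: read the effective transaction-log settings back from the running
# server, so the values in force can be compared with neo4j.conf.
rows, _ = db.cypher_query(
    "SHOW SETTINGS YIELD name, value "
    "WHERE name CONTAINS 'tx_log' "
    "RETURN name, value"
)
for name, value in rows:
    print(f"{name} = {value}")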

Database size looks like this:

~/data$ du -sh *
81G	databases
4.0K	server_id
435M	transactions

@alanbuxton

Is there any detail on what version of Neo4j is in play here?

>I have a neo4j database that after an import process ….

Can you describe the ‘import process’? Is this a LOAD CSV, a neo4j-admin database import, or something else?

Thanks for your super-fast response.

It's 5.20.0 (community edition).

The import process loads some files with neosemantics and then does various post-processing of the data.
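
Roughly, the import step is along these lines (an illustrative sketch only, not my exact code; the file URL and RDF format are placeholders):

from neomodel import db

# Illustrative sketch of the neosemantics (n10s) import step; the file URL and
# serialisation format are placeholders, not the real ones used here.
db.cypher_query(
    'CALL n10s.rdf.import.fetch("file:///imports/example.ttl", "Turtle")'
)
# ...followed by several post-processing steps over the imported nodes, such as
# update_geonames_locations_with_country_admin1() shown further down.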

The failure happens when I'm updating some Geo objects with their country etc., in this Python code (using neomodel):

import logging

from neomodel import db  # neomodel's shared database connection

logger = logging.getLogger(__name__)

# get_geo_data(geonamesid, version) is defined elsewhere in the import code;
# it returns cached geo data for the given GeoNames id, or None if missing.

def update_geonames_locations_with_country_admin1(version):
    '''
        Add country code and admin1 (state/province) into node for easier querying
    '''
    query = """MATCH (n: Resource&GeoNamesLocation)
                WHERE n.countryCode IS NULL
                AND n.countryList IS NULL
                AND n.featureClass IS NULL
                RETURN * LIMIT 1000"""
    while True:
        res, _ = db.cypher_query(query, resolve_objects=True)
        if len(res) == 0:
            logger.info("No more entities found, quitting")
            break
        for objs in res:
            obj = objs[0]
            geonamesid = obj.geoNamesId
            geo = get_geo_data(geonamesid, version)
            if geo is None:
                raise ValueError(f"No cached geo data for {geonamesid}")
            else:
                obj.countryCode = geo["country"]
                obj.admin1Code = geo["admin1"]
                obj.featureCode = geo["feature"]
                obj.countryList = geo["country_list"]
                obj.save()  # writes this node's updated properties back to the database

The error is raised at some point during one of these obj.save() calls.
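
Each obj.save() above writes that one node back on its own, so this loop makes a lot of small writes. In case it's relevant, here is a rough sketch of batching the same property updates into a single Cypher statement instead (matching on geoNamesId; a sketch of the idea, not what I currently run):

from neomodel import db

def batch_update_locations(rows):
    """Sketch only: apply the same property updates as the loop above, but in
    one Cypher write per batch instead of one save() per node.

    rows is a list of dicts like:
    {"geoNamesId": ..., "country": ..., "admin1": ..., "feature": ..., "country_list": [...]}
    """
    query = """
        UNWIND $rows AS row
        MATCH (n:Resource&GeoNamesLocation {geoNamesId: row.geoNamesId})
        SET n.countryCode = row.country,
            n.admin1Code  = row.admin1,
            n.featureCode = row.feature,
            n.countryList = row.country_list
    """
    db.cypher_query(query, {"rows": rows})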

I'm not 100% sure whether the DB goes into read-only mode while this method is running, or whether the previous step is what tipped it over the edge. The previous step checks the unique IDs that were assigned to objects for any duplicates.
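
For reference, that duplicate check is along these lines (sketch only; uri is just a hypothetical stand-in for the actual id property):

from neomodel import db

# Sketch of a duplicate check over the assigned unique ids; "uri" is a
# hypothetical stand-in for the actual id property.
rows, _ = db.cypher_query(
    """MATCH (n:Resource)
       WITH n.uri AS id, count(*) AS cnt
       WHERE cnt > 1
       RETURN id, cnt"""
)
for id_value, cnt in rows:
    print(f"{id_value} appears {cnt} times")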