TransactionMemoryLimit error

Hello,
We're new to Neo4j, and we keep getting a TransactionMemoryLimit error in one of our environments. We don't know the exact cause. We have not configured either of the following settings; both are at their default values:
db.memory.transaction.max
db.memory.transaction.total.max
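(For reference: the `used=2147483648, max=2147483648` in the error below is exactly 2 GiB, which suggests a 2 GiB per-transaction limit is in effect. If raising the limit is an option, these settings can be overridden in neo4j.conf — a sketch only; the values here are illustrative, not a recommendation:)

```
# neo4j.conf -- illustrative values only, tune to your workload/heap
# Maximum memory a single transaction may consume
db.memory.transaction.max=4G
# Maximum memory all transactions combined may consume
db.memory.transaction.total.max=8G
```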

Complete error:
neo4j.exceptions.TransientError: {code: Neo.TransientError.General.TransactionMemoryLimit} {message: Can't allocate extra 512 bytes due to exceeding memory limit; used=2147483648, max=2147483648}

We get the TransactionMemoryLimit error while running the query below:
2024-08-28 13:14:38.667+0000 ERROR 545000 ms: 1367912 B - 25815241 page hits, 0 page faults - bolt-session bolt neo4j-python/4.4.3 Python/3.8.10-final-0 (linux) client/10.10.7.41:52594 server/10.100.2.35:7687> neo4j - neo4j - USING PERIODIC COMMIT 100000 LOAD CSV WITH HEADERS FROM "http://10.100.2.8:8333/bpuaa/neo4jMigration/neo4jCSVs/netomnia_export_alarms_ep_rel.csv" as row MATCH (parent:NETOMNIA_EP {networkElementId:row.networkElementId, sourceName:row.sourceName, sourceId: row.sourceId}) WITH parent,row MATCH (child:ALARM{sequenceNumber : toInteger(row.sequenceNumber)}) WITH parent,child MERGE (parent)-[r:HAS_ALARM]->(child) ON CREATE SET r.mergeTime = timestamp() ON MATCH SET r.mergeTime = timestamp() - {} - runtime=pipelined - {} - Can't allocate extra 512 bytes due to exceeding memory limit; used=2147483648, max=2147483648

Memory details for neo4j DB:
dbms.memory.pagecache.size=4G
dbms.memory.heap.max_size=16G
dbms.memory.heap.initial_size=8G

Could you reduce the PERIODIC COMMIT batch size to 10,000 or even 1,000, rather than 100,000? If you have a lot of dense nodes, writing all of those nodes and relationships in a single batch may be clogging up transaction memory.
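For example, the same load with a smaller batch (a sketch based on your query above; note that since ON CREATE SET and ON MATCH SET assign the same value, a plain SET is equivalent):

```cypher
USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "http://10.100.2.8:8333/bpuaa/neo4jMigration/neo4jCSVs/netomnia_export_alarms_ep_rel.csv" AS row
MATCH (parent:NETOMNIA_EP {networkElementId: row.networkElementId, sourceName: row.sourceName, sourceId: row.sourceId})
MATCH (child:ALARM {sequenceNumber: toInteger(row.sequenceNumber)})
MERGE (parent)-[r:HAS_ALARM]->(child)
SET r.mergeTime = timestamp()
```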

An alternative is Cypher's CALL {} IN TRANSACTIONS (available from Neo4j 4.4, where it replaces the deprecated USING PERIODIC COMMIT), or APOC's apoc.periodic.iterate procedure, for more efficient batching.
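A sketch of the CALL {} IN TRANSACTIONS form of your query (it must be run in an implicit/auto-commit transaction, e.g. via the driver's session-level execution rather than inside an explicit transaction; batch size is illustrative):

```cypher
LOAD CSV WITH HEADERS FROM "http://10.100.2.8:8333/bpuaa/neo4jMigration/neo4jCSVs/netomnia_export_alarms_ep_rel.csv" AS row
CALL {
  WITH row
  MATCH (parent:NETOMNIA_EP {networkElementId: row.networkElementId, sourceName: row.sourceName, sourceId: row.sourceId})
  MATCH (child:ALARM {sequenceNumber: toInteger(row.sequenceNumber)})
  MERGE (parent)-[r:HAS_ALARM]->(child)
  SET r.mergeTime = timestamp()
} IN TRANSACTIONS OF 1000 ROWS
```

Each 1000-row batch commits in its own transaction, so peak transaction memory stays bounded by the batch rather than by the whole CSV. apoc.periodic.iterate achieves the same effect by splitting the work into a driving query and a per-batch action query.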
