I've been trying to load large amounts of data using the Python driver for Neo4j as part of an ETL script. However, at a certain point in the script I start hitting dbms.memory.transaction.total.max errors.
I have several transactions running, similar to how the docs show:
from neo4j import Transaction

def write_users(tx: Transaction, users):
    # `query` is the Cypher statement defined earlier in the script
    result = tx.run(query, users=users)
    result.consume()

session.execute_write(write_users, users=[some_large_list])
I have tried batching the list into smaller sets and running each batch individually, but the error message always remains the same. Is there something I'm missing about clearing the transaction data out of memory? Or should I be doing something like opening a new with driver.session(database="neo4j") as session for each transaction, or creating and closing the session between transactions?
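For reference, the batching I've tried looks roughly like this. The connection details, the UNWIND query, the batch size, and the sample data are just placeholders standing in for my real code:

from neo4j import GraphDatabase, Transaction

# Placeholder connection details, not my real config
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Hypothetical UNWIND-style query standing in for my real one
query = "UNWIND $users AS user CREATE (u:User) SET u = user"

# Stand-in for my real data
some_large_list = [{"id": i} for i in range(100_000)]

def write_users(tx: Transaction, users):
    result = tx.run(query, users=users)
    result.consume()

BATCH_SIZE = 1000  # arbitrary batch size I've experimented with

with driver.session(database="neo4j") as session:
    for i in range(0, len(some_large_list), BATCH_SIZE):
        batch = some_large_list[i:i + BATCH_SIZE]
        # Each call runs in its own managed transaction, so I expected
        # memory to be released between batches
        session.execute_write(write_users, users=batch)

driver.close()

Even with this per-batch approach, the error is the same, which is why I suspect I'm misunderstanding how transaction memory is released.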
I appreciate the help!