I filled a database with nodes (via CREATE) from a Python script using neo4j.GraphDatabase.
Every 1,000 entries I committed the transaction, closed the session, and began a new one.
This worked fine for a continuous set of ~30,000 nodes.
Then I tried it on a much larger dataset.
After ~1.5 million nodes the Python script stopped making progress. Checking the task manager showed Java running at 100% CPU.
To my understanding, this should not happen(?)
My code looks roughly like this:
driver = GraphDatabase.driver(uri, auth=(user, password))

# repeated ~3000 times:
session = driver.session()
transaction = session.begin_transaction()
transaction.run("CREATE ( ... )")  # 1000 times
transaction.commit()
session.close()
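For reference, here is a minimal sketch of how the per-transaction batching could look with a single parameterized UNWIND query per batch instead of thousands of literal CREATE statements (each distinct query string forces Neo4j to parse and plan it anew, while one parameterized query reuses a cached plan). The node list, label, and batch size here are made-up placeholders, not from my actual code:

```python
def chunked(items, size):
    """Yield successive fixed-size chunks from a list."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

# Hypothetical example data: one property dict per node to create.
nodes = [{"name": f"node{i}"} for i in range(2500)]

batches = list(chunked(nodes, 1000))
# Each batch would then be sent as one query via the neo4j driver, e.g.:
#   with driver.session() as session:
#       session.run("UNWIND $rows AS row CREATE (n:Node) SET n = row",
#                   rows=batch)
print(len(batches))      # 3 batches for 2500 nodes
print(len(batches[0]))   # 1000
print(len(batches[-1]))  # 500
```

The Cypher call is left as a comment since it needs a live database; the chunking itself is plain Python.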
Is this a memory issue within Neo4j?