I'm having trouble debugging what's going on in my workflow:
from neo4j import GraphDatabase

def get_results(uri):
    q = " ... my query ..."
    driver = GraphDatabase.driver(uri, auth=("neo4j", "pass"))
    with driver.session() as session:
        with session.begin_transaction() as tx:
            res = tx.run(q)
            for r in res:
                process_res(r)
            tx.success = True
The for loop seems to hang at random after processing a few hundred thousand results. My process_res()
function is simple enough that I don't think it's the cause.
Is this the correct way to ingest millions of results, or is there a better way?
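For what it's worth, the shape of the consuming loop can be reproduced without Neo4j at all. This stand-in (a generator faking a large lazy result stream, plus a no-op process_res; both are placeholders I wrote for illustration, not my real code) runs fine at this scale, which is part of why I suspect the driver/transaction handling rather than my own loop:

```python
def fake_result_stream(n):
    # Stand-in for the lazy cursor returned by tx.run(): yields
    # records one at a time instead of materializing them all.
    for i in range(n):
        yield {"id": i}

def process_res(record):
    # No-op placeholder for my real (equally trivial) processing.
    pass

count = 0
for r in fake_result_stream(500_000):
    process_res(r)
    count += 1

print(count)  # 500000
```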