Connection closed after a few minutes: defunct connection Address

Hi,
I have a server in which the connection is opened like this (Python):

    self._driver: Driver = GraphDatabase.driver(
        credentials["uri"],
        auth=(credentials["user"], credentials["password"]),
        max_connection_lifetime=3600 * 24 * 30,
        keep_alive=True,
    )

I run some requests and everything works perfectly. The requests are executed through this function:

    def __exe_query(self, query: str):
        with self._driver.session() as session:
            session.run(query).consume()

Although these queries do not return anything relevant, I added .consume() based on this post: Error while loading data in Neo4j (Python Driver) - #7 by amitshipra.

I wait 4-5 minutes (while the server keeps running, waiting for other requests), run the exact same requests, and get this error:

"Failed to read from defunct connection Address(host='****', port=**)"

I also tried with max_connection_lifetime=-1.

I am working with Python 3.7.0 and these library versions: neo4j==1.7.2, neobolt==1.7.9, neotime==1.7.4, as suggested in Underlying Socket connection Gone error with Vesion 1.7.4 and Neo4j 3.5 · Issue #293 · neo4j/neo4j-python-driver · GitHub.
It does not work on either Win10 or Ubuntu 18.04.3 LTS.

Thanks for your help and attention,
Vittorio

Can you provide the queries you're running? The issue might be there.

Hi,

I'm facing the same issue - is there any progress on this topic?

Thanks.

BR,
Tobias

I solved the issue by creating a parallel thread that is scheduled to fire a query (adding and then deleting a node) every 30 s. From my understanding, this is needed because a connection that stays idle for more than 2-3 minutes gets closed.
Hope that helps.
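
In case it is useful, this is roughly what the thread looks like (a minimal sketch; start_keep_alive and the KeepAlive label are just illustrative names, and driver is the one created above):

    import threading
    import time

    def start_keep_alive(driver, interval=30.0):
        """Fire a trivial write query every `interval` seconds so the
        connection never sits idle long enough to be dropped."""
        def _loop():
            while True:
                with driver.session() as session:
                    # Create and immediately delete a throwaway node.
                    session.run("CREATE (n:KeepAlive) DELETE n").consume()
                time.sleep(interval)
        thread = threading.Thread(target=_loop, daemon=True)
        thread.start()
        return thread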

Vittorio

I had the same problem and basically tried all of the methods mentioned scattered across Google, plus some others:

- Used the neo4j driver instead of py2neo, using write_transaction. This didn't work.
- Set the max_connection_lifetime and keep_alive parameters, etc. This didn't work.
- Set up a parallel thread to periodically create/delete nodes. This didn't work.
- Created a method that split files into temporary chunks instead of using PERIODIC COMMIT. This didn't work.

I had previously noticed in the task manager that memory allocation dropped heavily just before this error appeared. I initially thought this was a connection issue rather than a memory issue... but lo and behold, raising the memory allocations in the Neo4j settings was the only thing that prevented this error from occurring.
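
For anyone looking for the relevant knobs: the memory allocations live in neo4j.conf. The values below are only examples; size them to your data and hardware:

    # neo4j.conf -- example values only
    dbms.memory.heap.initial_size=2g
    dbms.memory.heap.max_size=4g
    dbms.memory.pagecache.size=2g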
