I have a cluster with 2 very large databases. 600 million+ nodes and 2 billion+ relationships each.
I want to drop one of them and am terrified to use the simple `DROP DATABASE` command in this production environment. Has anyone had experience with pulling each server out of the cluster and deleting the target database manually?
Will the drop command bring the whole cluster down?
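For reference, the declarative path I'm weighing looks roughly like this (a sketch assuming Neo4j 5.x admin syntax, run against the `system` database; `targetdb` is a placeholder name):

```cypher
// Confirm the target database and its current status first
SHOW DATABASES;

// Take it offline so no clients are writing to it
STOP DATABASE targetdb WAIT;

// Then drop it; WAIT blocks until the command completes cluster-wide,
// NOWAIT would return immediately instead
DROP DATABASE targetdb IF EXISTS WAIT;
```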
a. What version of Neo4j is in play here?
b. Do you have more details of the concern and/or experience which suggests `DROP DATABASE ...` is not the way to go? I'm also confused, since a drop gets rid of a database; even if it fails, which is admittedly not great, the net net is you still wanted to get rid of it.
c. Deleting the target database manually. How would this be achieved? By deleting files from under data/? If so, this would leave you in a totally unsupported state.
Firstly, I appreciate your thoughtful questions. Thank you.
We're on 5.16.0 Enterprise
My experience which gives me pause is some very unexpected behavior with indexes. For example, adding an index typically takes a few milliseconds and the index then builds over time, but I've seen `CREATE INDEX` pause all writes on the entire cluster until the index was completely built. I've also seen index drops take many minutes and/or fail completely, stopping all writes in the same way.
The method for re-creating an index manually is to pull the server out of the cluster and delete the file associated with the index. Then restarting Neo4j will rebuild the index.
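The declarative alternative I'd prefer looks something like this (a sketch assuming Neo4j 5.x Cypher syntax; the index, label, and property names are placeholders):

```cypher
// Check existing indexes and monitor build progress
SHOW INDEXES YIELD name, state, populationPercent;

// Drop and recreate declaratively instead of deleting files on disk
DROP INDEX my_index IF EXISTS;
CREATE INDEX my_index IF NOT EXISTS FOR (n:MyLabel) ON (n.myProp);
```

It's exactly these commands that I've seen behave unpredictably at this scale, which is why the file-deletion workaround came up at all.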
Overall, large cluster-wide operations concern me when there's no documentation on expected behavior in larger environments like this.
If I drop the database and it stops all writes for an extended period of time, it literally brings our business to a stop, and that's not acceptable in a SaaS environment.