I'm attempting to export my DB as cypher. On an earlier version of Neo4j (3.3.9), setting
streamStatements=true with batchSize=100 splits the response into multiple rows. When I run the same export on 3.4 or 3.5, I get only 3 rows regardless of what I set batchSize to. With the older version I could execute the cypher row by row and successfully run all the queries without using all the memory on the target DB; with the later versions I can't successfully run the largest of the 3 individual queries.
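For reference, the export call I'm running is along these lines (a sketch from memory; the procedure name and config keys are as described in the APOC docs, and the file argument is null because I'm streaming rather than writing a file — newer APOC releases also document a plain `stream: true` flag):

```cypher
// Stream the whole DB as cypher statements instead of writing to a file
CALL apoc.export.cypher.all(null, {
  streamStatements: true,
  batchSize: 100,
  format: 'plain'
})
```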
Am I doing something wrong, or misunderstanding the effect of batchSize? Is there a way to split the cypher in later Neo4j/APOC versions so that it works as it did in 3.3.9?
match (n) return count(n)
I know this isn't the best way to create a backup, but I'm looking for a short-term risk-mitigation approach: I don't have access to the instances, and the vendor-managed backups are EBS volume snapshots, so I have no independent backup that I could restore to another site if necessary. We're working on getting that in place, but in the meantime I want at least a snapshot that I control.
Worth noting that in 3.3.9 I get 101 rows back, each an individual cypher query, and these process fine in a separate, relatively small instance. So iterating over 101 queries is definitely workable, since this is really only a short-term risk-mitigation approach.
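The per-row iteration I'm describing can be sketched roughly like this (an assumption on my part: the streamed yield column is named `cypherStatements` as in the APOC docs, and `apoc.cypher.runMany` is one documented way to replay a batch of semicolon-separated statements on the target instance):

```cypher
// On the source instance: stream the export in batches of 100 statements
CALL apoc.export.cypher.all(null, {streamStatements: true, batchSize: 100, format: 'plain'})
YIELD cypherStatements
RETURN cypherStatements;

// On the target instance: replay one row's text at a time,
// passed in as a parameter, so memory use stays bounded per batch
CALL apoc.cypher.runMany($cypherStatements, {});
```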
Well, it seems exporting with batchSize is the right idea. In my case the database only added the nodes and didn't create the relationships. So I opened the cypher script and removed some lines, such as BEGIN and the CREATE INDEX statements. After that the database created both the nodes and the relationships, and I added the indexes back manually. There is a slight difference in cypher and APOC as you move from 3.x to 3.5.
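For anyone following the same workaround, re-adding an index afterwards is just the usual 3.x syntax (the label and property here are placeholders, not from the export):

```cypher
// Recreate a schema index dropped from the exported script
CREATE INDEX ON :Person(name);
```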
I was wondering whether I had misunderstood what batchSize was intended to do.
It seems that batchSize works as I anticipated in 3.3.9, but doesn't have any effect in later versions. With format: 'plain', the exported cypher doesn't include the begin/commit statements.