So I have the embarrassing issue of having lost account access to the server that hosts some important research data in a Neo4j instance. Remote backup is enabled, but I've had problems with it, and as a data science student I want to get experience with other databases, not just Neo, much as I love it. The dataset is probably better suited to a more traditional database anyway: everything is one kind of node, and I haven't really made use of relationships.
In any case: I can access the database from N4J Desktop, the web interface, and Cypher Shell, but I can't access the file system on the server, so APOC's export-to-file feature is out. Instead (hey! Three dead Matrix characters in two sentences!) I've been getting my data out as a giant CSV or JSON export from N4J Desktop.
The problem is that both the CSV and the JSON come out invalid. Only MongoDB seems able to ingest the JSON, and only with the help of some third-party tools; the JSON fails validation in JSONLint and the equivalent tools in VSCode. The CSV's problem is that newlines are improperly escaped in a large text field that contains whole paragraphs.
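For what it's worth, those symptoms suggest the JSON export may be newline-delimited JSON (one object per line) rather than a single JSON document; that would explain why Mongo's tooling accepts it while JSONLint rejects it. And if the CSV's multiline field is at least quoted, a CSV parser that understands quoting can read it even where line-based tools choke. A rough sketch, assuming those two guesses are right (file paths are placeholders):

```python
import csv
import json

def jsonl_to_array(src_path, dst_path):
    """Convert newline-delimited JSON (one object per line) into a
    single valid JSON array that any validator or importer will accept."""
    records = []
    with open(src_path, encoding="utf-8") as src:
        for line in src:
            line = line.strip()
            if line:  # skip blank lines
                records.append(json.loads(line))
    with open(dst_path, "w", encoding="utf-8") as dst:
        json.dump(records, dst, ensure_ascii=False, indent=2)
    return len(records)

def read_quoted_csv(path):
    """Parse CSV where a quoted field may contain embedded newlines.
    Python's csv module handles quoted multiline fields correctly,
    provided the file is opened with newline=''."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.reader(f))
```

If `jsonl_to_array` throws on the very first line, the export isn't line-delimited and the breakage is something else entirely; in that case it would help to post a (redacted) sample of the invalid output.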
Does anyone have any idea what I could do in this situation? As it stands, the best plan I can come up with is to import the JSON into Mongo, export it from there, and import that output into my real destination (Postgres). But I don't understand why Neo Desktop ships this undocumented export feature only for it to emit invalid content.
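If the export really is one JSON object per line, one way to skip the Mongo round-trip is to flatten each record into a CSV row and load it into Postgres with `COPY`. A sketch, assuming each line is a flat object; the column list is yours to fill in:

```python
import csv
import json

def jsonl_to_copy_csv(src_path, dst_path, columns):
    """Flatten one-object-per-line JSON into a CSV that Postgres COPY
    can load. `columns` fixes the column order; keys missing from a
    record become empty strings."""
    count = 0
    with open(src_path, encoding="utf-8") as src, \
         open(dst_path, "w", newline="", encoding="utf-8") as dst:
        writer = csv.writer(dst)
        writer.writerow(columns)  # header row, for COPY ... CSV HEADER
        for line in src:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            writer.writerow([record.get(col, "") for col in columns])
            count += 1
    return count
```

Then in psql something like `\copy mytable FROM 'out.csv' CSV HEADER` (table name hypothetical). The csv module quotes embedded newlines properly on the way out, which sidesteps the broken escaping in the Desktop export.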
Thanks for any help!
EDIT: for the record, I have tried APOC's stream (no file) export option, but the dataset is simply too large and the query hangs. If there were a way to guarantee I got all the data, just in chunks, I could work with that; is there a way to export nodes by ID?
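On the EDIT: internal node IDs can be paged over with a `WHERE` clause on `id(n)` (note `id()` is deprecated in Neo4j 5 in favor of `elementId()`, but works in 4.x), so instead of one giant streaming export you can run many small range-bounded queries. A hypothetical helper that just generates those queries for pasting into Cypher Shell; the label, max ID, and chunk size are all assumptions to adjust:

```python
def chunked_export_queries(label, max_id, chunk_size=10000):
    """Yield Cypher queries covering disjoint id(n) ranges.

    Internal ids are not guaranteed to be dense, so some chunks may
    return fewer rows than chunk_size (or none at all), but together
    the ranges cover every node with that label."""
    lo = 0
    while lo <= max_id:
        hi = lo + chunk_size
        yield (
            f"MATCH (n:{label}) "
            f"WHERE id(n) >= {lo} AND id(n) < {hi} "
            f"RETURN n;"
        )
        lo = hi
```

Get the upper bound first with `MATCH (n) RETURN max(id(n));`. Each chunk produces a bounded result set, so nothing ever has to stream the whole dataset at once, and a chunk that hangs or fails can be retried on its own.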