Bulk Import to Neo4j Cluster without Server Access

Hi all,

I've been working with Neo4j locally using Docker containers, and using `neo4j-admin import` with CSV files has been easy and seamless. Now I'm working to move my project to production servers and I'm getting some pushback from the database team. They do not want to give me access to the Neo4j servers, just the database.

So, I wanted to reach out to this community with some questions. Are there any other options for bulk import, let's say >1,000,000 records, without access to the underlying Neo4j servers?

From my understanding, the LOAD CSV command still requires the files to be in the import directory unless certain database settings are changed. Given the security risks of exposing a database to outside imports, it seems unlikely the team would allow that. Would it also be possible to temporarily allow imports from outside the import directory and then revert the setting afterwards to close it off again?
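For reference, these are the settings I mean, as a sketch of the relevant `neo4j.conf` entries (names are from Neo4j 4.x; in 5.x the directory setting moved under the `server.` prefix):

```
# Directory that file:/// URLs in LOAD CSV are restricted to (relative to the Neo4j home)
dbms.directories.import=import

# Whether LOAD CSV may read from file:// URLs at all
dbms.security.allow_csv_import_from_file_urls=true
```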

Thanks for the help

LOAD CSV allows files to be read from an HTTP(S) server, so you can host them somewhere reachable and read them remotely.

Here are two examples from the documentation:

//Example 1 - website
LOAD CSV FROM 'https://data.neo4j.com/northwind/customers.csv' AS row
RETURN row

//Example 2 - Google Sheets (put your spreadsheet ID between the slashes in the URL)
LOAD CSV WITH HEADERS FROM 'https://docs.google.com/spreadsheets/d//export?format=csv' AS row
RETURN row

You can use CALL { ... } IN TRANSACTIONS to batch the writes, which makes large imports much more efficient and avoids building one huge transaction.
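A sketch of what that could look like (the CSV URL, header names, and `Customer` label are placeholders, not from your data):

```cypher
// Stream rows from a hosted CSV and commit every 10,000 rows
LOAD CSV WITH HEADERS FROM 'https://example.com/customers.csv' AS row
CALL {
  WITH row
  MERGE (c:Customer {id: row.customerId})
  SET c.name = row.name
} IN TRANSACTIONS OF 10000 ROWS
```

Note that CALL { ... } IN TRANSACTIONS must run in an implicit (auto-commit) transaction; in Neo4j Browser that means prefixing the statement with `:auto`.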

APOC also has an apoc.load.json procedure that takes a URL, so if you can convert your data to JSON this may also be an option.
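A sketch of that approach, assuming a hosted JSON array of objects (the URL and field names are placeholders):

```cypher
// Each element of the JSON array is yielded as `value`
CALL apoc.load.json('https://example.com/customers.json') YIELD value
MERGE (c:Customer {id: value.customerId})
SET c.name = value.name
```

For large JSON files you can wrap the MERGE/SET in a CALL { ... } IN TRANSACTIONS block the same way as with LOAD CSV.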