I'm looking for some advice on best practices for populating a database when a Docker container is first created. I have a docker-compose file that launches an instance of neo4j-4.2.0-community alongside a Python application that interacts with the database. I also have a Cypher script and a CSV file that together populate the database with appropriate test data. My question is:
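For context, my compose file looks roughly like the sketch below. The service names, credentials, ports, and the app's build path are simplified placeholders rather than my exact setup:

```yaml
version: "3.8"
services:
  neo4j:
    image: neo4j:4.2.0            # community edition is the default image
    environment:
      - NEO4J_AUTH=neo4j/test     # illustrative credentials only
    ports:
      - "7474:7474"               # Neo4j Browser
      - "7687:7687"               # Bolt, used by the Python app
    volumes:
      - neo4j-data:/data          # keeps database files out of version control
  app:
    build: ./app                  # placeholder path for the Python application
    depends_on:
      - neo4j
volumes:
  neo4j-data:
```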
What is the recommended way to store and execute the script and CSV file so that it puts the least burden on collaborators? Ideally, I shouldn't have to tell them to dig the CSV file out of version control and paste it into the Neo4j Browser prompt. I also don't want to track the database itself in version control.
As a side note that may or may not be relevant: I noticed that Neo4j Browser has a tab for local scripts. Is there a way I can mount this as a volume, so that when a collaborator creates the containers they can simply click the saved script to populate the database? That would be sufficient, and actually preferable to having the script run automatically on startup or something similar.
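Something like the following is the shape of what I'm imagining, though I don't actually know whether the Browser's saved scripts live at a mountable path inside the container; the target path below is purely a placeholder guess:

```yaml
services:
  neo4j:
    volumes:
      - ./db/seed:/path/to/browser-scripts   # hypothetical location of the "local scripts" folder
```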
Any and all help is much appreciated, and please let me know if you need further information.