Hi Sammy -- thanks for the reply!
Makes total sense. I've spent most of my time experimenting with and mapping out different graph schemas, getting a feel for how certain design decisions affect queries and the graph's ease of use for specific business questions/statements.
For my current project, we have a ton of related data scattered across different CSV files in various denormalized states, so part of the process has been: "Ok, given that I've designed a good graph data model, how should I first organize and tabularize all of our data assets so that I can bulk load them into Neo4j?"
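To make the "tabularize first" step concrete, here's a minimal sketch of what I mean -- the file and column names (`orders.csv`-style data with repeated customer info) are made up for illustration, but the idea is: split a denormalized CSV into a deduplicated node table and a relationship table, each of which can then be bulk loaded separately:

```python
import csv
import io

# Hypothetical denormalized input: customer info is repeated on every order row.
RAW = """order_id,customer_id,customer_name,amount
1001,c1,Alice,25.00
1002,c1,Alice,40.00
1003,c2,Bob,15.50
"""

def tabularize(raw_csv):
    customers = {}  # customer_id -> node row (dict keys dedupe the repeats)
    rels = []       # one row per Order-[:PLACED_BY]->Customer relationship
    for row in csv.DictReader(io.StringIO(raw_csv)):
        customers[row["customer_id"]] = {
            "customer_id": row["customer_id"],
            "name": row["customer_name"],
        }
        rels.append({
            "order_id": row["order_id"],
            "customer_id": row["customer_id"],
            "amount": row["amount"],
        })
    return list(customers.values()), rels

nodes, rels = tabularize(RAW)
# nodes: one row per unique customer; rels: one row per order
```

The output tables map one-to-one onto node labels and relationship types in the graph model, which makes the subsequent `LOAD CSV` / `MERGE` statements much simpler to write and check.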
A related challenge has been: "How do I then make sure the Cypher statements I'm using to define nodes and relationships during the bulk load actually match the data model?"
Neo4j's uniqueness and existence constraints help with that last part, as does using MERGE correctly. Check constraints on a property's values would also be helpful, but I haven't found them in Neo4j (maybe in APOC?), so I'm planning to get my hands dirty with a Py2Neo/Flask layer -- for the bulk load, sure, but more generally for new data that will arise later, which other people might/will be in charge of adding to the graph. (Do you have any experience with that?)
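For the check-constraint gap, the rough shape I have in mind is an application-side validator that rejects bad rows before they ever reach a MERGE -- a sketch only, with a made-up `Order` model and `status` whitelist:

```python
# Uniqueness/existence constraints live in the database itself, e.g.
# (syntax varies by Neo4j version):
#   CREATE CONSTRAINT ON (o:Order) ASSERT o.order_id IS UNIQUE
# Value-level "check constraints" don't, so a thin Python layer can
# enforce them before the row is written.

ALLOWED_STATUS = {"open", "shipped", "cancelled"}  # hypothetical whitelist

def validate_order(props):
    """Return a list of violations; an empty list means the row is safe to MERGE."""
    errors = []
    if not props.get("order_id"):
        errors.append("order_id is required")  # existence-style check
    if props.get("status") not in ALLOWED_STATUS:
        errors.append(f"bad status: {props.get('status')!r}")  # check-style
    return errors

# Rows that pass would then go through Py2Neo, roughly like
# (not executed here; assumes an existing py2neo Graph object):
#   graph.run("MERGE (o:Order {order_id: $order_id}) "
#             "SET o.status = $status", **props)
```

The same validator can sit behind a Flask endpoint later, so the people updating the graph by hand get the same checks the bulk loader does.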