Where I might split up the data is if a portion of it fits the traditional RDBMS model and there's a TON of it.
E.g., suppose there is a huge amount of real-time log or transaction data that's very table-oriented. You could run a PostgreSQL process that packages up a summary of that data and then periodically exports it into Neo4j. A mature RDBMS also has ways of being efficient with storage allocation, whereas Cypher's flexibility makes it less efficient. It's not clear whether that would make a significant difference in your use case.
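As a rough sketch of what that split might look like (all table, column, and label names here are made up for illustration):

```sql
-- PostgreSQL side: roll raw transaction rows up into a daily summary
CREATE MATERIALIZED VIEW daily_summary AS
SELECT account_id,
       date_trunc('day', created_at) AS day,
       count(*)                      AS tx_count,
       sum(amount)                   AS total_amount
FROM   transactions
GROUP  BY account_id, date_trunc('day', created_at);

-- Dump the summary somewhere Neo4j can read it
COPY (SELECT * FROM daily_summary) TO '/tmp/daily_summary.csv' WITH CSV HEADER;
```

```cypher
// Neo4j side: periodically pull the summary in
LOAD CSV WITH HEADERS FROM 'file:///daily_summary.csv' AS row
MERGE (a:Account {id: row.account_id})
CREATE (a)-[:HAS_SUMMARY]->(:DailySummary {
  day: row.day,
  txCount: toInteger(row.tx_count),
  totalAmount: toFloat(row.total_amount)
});
```

In practice you'd schedule both steps (cron, or a small ETL job) and probably upsert rather than blindly CREATE, but that's the shape of it.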
Otherwise, node attributes in Cypher can mimic tables pretty well (although Cypher doesn't come with the same level of safety features an RDBMS has to prevent "dumb" mistakes). Neo4j Enterprise does come with some constraints, but they're not quite as comprehensive as what an RDBMS gives you.
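For a sense of what's available (label and property names are hypothetical; syntax as of Neo4j 4.4+):

```cypher
// Uniqueness constraints are available in Community edition:
CREATE CONSTRAINT account_id_unique IF NOT EXISTS
FOR (a:Account) REQUIRE a.id IS UNIQUE;

// Property existence constraints are Enterprise-only:
CREATE CONSTRAINT account_id_exists IF NOT EXISTS
FOR (a:Account) REQUIRE a.id IS NOT NULL;
```

There's no equivalent of CHECK constraints, foreign keys, or column types being enforced per-property the way an RDBMS does it.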
It would help if you specified the amount of data you're talking about... e.g. how many records, how many bytes per record, how fast the data grows, etc.
The other thing is how ad hoc the queries are going to be. Cypher is really great when somebody starts wanting to make queries that nobody anticipated. In an RDBMS, going off the beaten path can become a nightmare.
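A hypothetical example of what I mean: suppose somebody asks "which accounts are within three hops of a flagged account?" (again, labels and properties are made up):

```cypher
// One short variable-length pattern match in Cypher:
MATCH (f:Account {flagged: true})-[*1..3]-(nearby:Account)
RETURN DISTINCT nearby.id;
```

The SQL version of that needs a recursive CTE walking a join table, with cycle handling done by hand, which is not something most people can write off the cuff.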
The other thing is if, in the course of building out your schema, you discover something about the nature of the data that you hadn't anticipated. Most typically, it's all too easy to make simplifying assumptions to keep an RDBMS schema simple, only to discover there was a misunderstanding about the data that results in an unanticipated many-to-many relationship. That means a schema migration plus newly re-formed queries full of ugly JOINs.
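The classic version of that migration, sketched with hypothetical tables: a foreign key column turns into a junction table.

```sql
-- Before: one customer per order, modeled as orders.customer_id.
-- After discovering orders can have multiple customers:
CREATE TABLE order_customers (
  order_id    bigint REFERENCES orders(id),
  customer_id bigint REFERENCES customers(id),
  PRIMARY KEY (order_id, customer_id)
);
-- ...and every existing query that joined orders to customers
-- now needs an extra JOIN through order_customers.
```

In Cypher the same discovery is a non-event: you just create more `(:Customer)-[:PLACED]->(:Order)` relationships, and existing pattern matches keep working.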