I believe that importing data from an RDBMS or Parquet files is the really challenging part, and beyond that, the complex task is creating nodes automatically from the rows, as mentioned below:
A row is a node
A table name is a label name
A join or foreign key is a relationship
I am self-learning Neo4j and I am having a very difficult time understanding how I can import data in such a manner that my nodes are created automatically based on the import schema & data, and then later create the relationships after all the nodes are created (or, better, create the relationships at the time the nodes are created).
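For example, my understanding is that the node-creation step would look roughly like this (the file name patients.csv and its columns are just placeholders for one table exported from my MySQL DB):

```cypher
// One exported table -> one label; each row -> one node.
// The "patients" table becomes :Patient nodes here.
LOAD CSV WITH HEADERS FROM 'file:///patients.csv' AS row
MERGE (p:Patient {patientId: row.patient_id})
SET p.firstName = row.first_name,
    p.lastName  = row.last_name,
    p.birthDate = row.birth_date;
```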
Create the relationships using the keys stored on the nodes (using MATCH to find the nodes created in step 1).
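A minimal sketch of that step, assuming the foreign key was kept as a property when the nodes were loaded (the labels, keys, and visits.csv file are illustrative, not from your schema):

```cypher
// Step 2: connect existing nodes via the foreign key.
// Assumes :Patient and :Visit nodes were created in step 1.
LOAD CSV WITH HEADERS FROM 'file:///visits.csv' AS row
MATCH (p:Patient {patientId: row.patient_id})
MATCH (v:Visit {visitId: row.visit_id})
MERGE (p)-[:HAS_VISIT]->(v);
```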
The tricky part is the many-to-many relationships that can occur in a relational DB. Those use a join (many-to-many) table, which can be a pain to turn into a relationship.
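For instance, a hypothetical join table patient_doctor(patient_id, doctor_id) would not become nodes at all; each of its rows just becomes one relationship between the two nodes it references:

```cypher
// The many-to-many table itself disappears from the graph:
// every row of the join table becomes one relationship.
LOAD CSV WITH HEADERS FROM 'file:///patient_doctor.csv' AS row
MATCH (p:Patient {patientId: row.patient_id})
MATCH (d:Doctor {doctorId: row.doctor_id})
MERGE (p)-[:TREATED_BY]->(d);
```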
Try it with a small amount of data first. You'll probably make a lot of trial-and-error attempts, so it makes no sense to spend a lot of time loading a huge amount of data just to discover it's all wrong.
Thanks for your reply. I have 10 GB of patient data in MySQL and might start with a small batch. I was looking for a robust way of doing it, but I think I will have to do it in small pieces.
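I assume something like this would let me load it in batches instead of one huge transaction (a sketch only, assuming Neo4j 4.4+ and the placeholder patients.csv file from above):

```cypher
// :auto is needed in Neo4j Browser; commits every 10,000 rows.
:auto LOAD CSV WITH HEADERS FROM 'file:///patients.csv' AS row
CALL {
  WITH row
  MERGE (p:Patient {patientId: row.patient_id})
  SET p.lastName = row.last_name
} IN TRANSACTIONS OF 10000 ROWS;
```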
Are you planning to create the exact same data model as your RDBMS? If so, why keep the same data model?
I am not creating or replicating the exact data model of the MySQL DB. I want to bring that data from the MySQL DB into the graph database based on a newly created graph model.
What are your query models?
I didn't understand this question, please elaborate.
Are you working on some self-learning or a POC?
Yes
Is your MySQL a production database, or just a demo database for your self-learning?
Dummy data for self-learning/POC
If you are maintaining the same data model, does MySQL have performance issues?
No.