While importing CSV data into Neo4j, the import got stuck at the Node Degrees stage. I waited for more than 2 hours with no luck. Please find below the output printed while running
MigrateMariaObjectsToNeo4j.py. There are no errors in neo4j.log either. Any debugging steps or alternative approaches to solve this issue would be appreciated.
OS: CentOS 7.7
Neo4j version: 3.2.12
Available resources:
Total machine memory: 23.39 GB
Free machine memory: 9.83 GB
Max heap memory : 5.65 GB
Processors: 1
Configured max memory: 3.76 GB
Nodes, started 2020-01-29 06:29:37.813+0000
[>:6.97 MB/s----------|NODE:7.|PROPERTIES------------------------------------|LA|v:8.07 MB/s-] 196K ∆10.9K
Done in 17s 340ms
Prepare node index, started 2020-01-29 06:29:55.312+0000
[DETECT:8.37 MB------------------------------------------------------------------------------] 200K ∆ 200K
Done in 688ms
Relationships, started 2020-01-29 06:29:56.159+0000
[>:22.69 MB/s------|TY|PREPARE------------------------------------------------------------|||] 191K ∆81.9K
Done in 3s 981ms
Node Degrees, started 2020-01-29 06:30:00.329+0000
[>:904.00 B/s--------------------------------------------------------------------------------] 0 ∆ 0
[*>:320.00 B/s--------------------------------------------------------------------------------] 0 ∆ 0
[*>:308.00 B/s--------------------------------------------------------------------------------] 0 ∆ 0
[*>:216.00 B/s--------------------------------------------------------------------------------] 0 ∆ 0
Yes Stefan, I am using neo4j-admin import, and below is the command.
neo4j-admin import --nodes /tmp/node_export_ne.csv --nodes /tmp/node_export_ep.csv --nodes /tmp/node_export_service.csv --nodes /tmp/node_export_connection.csv --nodes /tmp/node_export_subnet.csv --nodes /tmp/node_export_customer.csv --relationships /tmp/node_export_subnet_ne_rel.csv --mode=csv
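For completeness, each CSV carries its own header row in the format neo4j-admin import expects. A minimal sketch of what two of the files look like — the column names below are placeholders, not my real schema; only :ID, :LABEL, :START_ID, :END_ID and :TYPE are the reserved fields:

```
node_export_subnet.csv           (hypothetical columns)
subnetId:ID,name,cidr:string,:LABEL
sub-001,backbone,10.0.0.0/24,Subnet

node_export_subnet_ne_rel.csv    (hypothetical columns)
:START_ID,:END_ID,:TYPE
sub-001,ne-042,CONNECTED_TO
```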
I am able to run the same command on another Linux server, where it converts the Neo4j objects properly. But on my VM it does not work. I have checked neo4j.log and debug.log for any errors or warnings, and everything looks fine. Are there any additional parameters/steps that can be enabled to find the root cause?
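One thing I am considering in the meantime (just my own guess, not something the Neo4j docs prescribe for this case) is taking a couple of thread dumps of the import JVM while it sits at Node Degrees, to see whether its threads are computing, blocked, or waiting on I/O:

```
# find the PID of the import process (the main class name can differ between versions)
jps -l | grep -i import

# take two dumps ~30 seconds apart and compare where the import threads are parked
jstack <pid> > /tmp/import_dump_1.txt
sleep 30
jstack <pid> > /tmp/import_dump_2.txt
```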
I guess things are simply taking much longer there. The import tool is optimized to use all available CPU power. Why not run the import on the more powerful server and then move the generated database folder over?
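Roughly like this, assuming the default graph.db database name and the standard CentOS package paths on both machines (adjust to your layout):

```
# on the bigger server, after neo4j-admin import has finished
tar czf graph.db.tar.gz -C /var/lib/neo4j/data/databases graph.db

# on your VM, with Neo4j stopped
scp bigserver:graph.db.tar.gz /tmp/
tar xzf /tmp/graph.db.tar.gz -C /var/lib/neo4j/data/databases/
chown -R neo4j:neo4j /var/lib/neo4j/data/databases/graph.db
systemctl start neo4j
```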
We are migrating only a small amount of data, which shouldn't require much CPU. However, I have increased the CPU count on my VM and it is working fine now. Thanks for replying.
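For anyone who lands here with the same symptom: the import tool parallelizes based on the processor count it reports at startup (Processors: 1 in the log above), so it is worth checking what the VM actually exposes before kicking off the import, e.g.:

```
nproc                      # CPUs visible to the OS
lscpu | grep '^CPU(s):'
```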