I have a Neo4j DB running in the cloud. I have been using the Data Importer tool to build the graph model/schema. Generating Cypher scripts from Data Importer has been very helpful so far, but I noticed I had to modify the queries to make them work on my machine. I have attached an example below, and any explanation would be very helpful.
:param {
// Define the file path root and the individual file names required for loading.
// https://neo4j.com/docs/operations-manual/current/configuration/file-locations/
file_path_root: 'file:///', // Change this to the folder your script can access the files at.
file_0: 'class.csv',
file_1: 'student.csv'
};
// CONSTRAINT creation
// -------------------
//
// Create node uniqueness constraints, ensuring no duplicates for the given node label and ID property exist in the database. This also ensures no duplicates are introduced in future.
//
// NOTE: The following constraint creation syntax is valid for database version 4.4.0 and above.
CREATE CONSTRAINT `ClassID_Class_uniq` IF NOT EXISTS
FOR (n: `Class`)
REQUIRE (n.`ClassID`) IS UNIQUE;
CREATE CONSTRAINT `StudentID_Student_uniq` IF NOT EXISTS
FOR (n: `Student`)
REQUIRE (n.`StudentID`) IS UNIQUE;
:param {
idsToSkip: []
};
// NODE load
// ---------
//
// Load nodes in batches, one node label at a time. Nodes will be created using a MERGE statement to ensure a node with the same label and ID property remains unique. Pre-existing nodes found by a MERGE statement will have their other properties set to the latest values encountered in a load file.
//
// NOTE: Any nodes with IDs in the 'idsToSkip' list parameter will not be loaded.
LOAD CSV WITH HEADERS FROM ($file_path_root + $file_0) AS row
WITH row
WHERE NOT row.`ClassID` IN $idsToSkip AND NOT toInteger(trim(row.`ClassID`)) IS NULL
CALL {
WITH row
MERGE (n: `Class` { `ClassID`: toInteger(trim(row.`ClassID`)) })
SET n.`ClassID` = toInteger(trim(row.`ClassID`))
SET n.`Label` = row.`Label`
SET n.`Leader` = row.`Leader`
SET n.`BuildingNumber` = toInteger(trim(row.`BuildingNumber`))
SET n.`StudentID` = toInteger(trim(row.`StudentID`))
} IN TRANSACTIONS OF 10000 ROWS;
LOAD CSV WITH HEADERS FROM ($file_path_root + $file_1) AS row
WITH row
WHERE NOT row.`StudentID` IN $idsToSkip AND NOT toInteger(trim(row.`StudentID`)) IS NULL
CALL {
WITH row
MERGE (n: `Student` { `StudentID`: toInteger(trim(row.`StudentID`)) })
SET n.`StudentID` = toInteger(trim(row.`StudentID`))
SET n.`Name` = row.`Name`
SET n.`ClassID` = toInteger(trim(row.`ClassID`))
} IN TRANSACTIONS OF 10000 ROWS;
// RELATIONSHIP load
// -----------------
//
// Load relationships in batches, one relationship type at a time. Relationships are created using a MERGE statement, meaning only one relationship of a given type will ever be created between a pair of nodes.
LOAD CSV WITH HEADERS FROM ($file_path_root + $file_0) AS row
WITH row
CALL {
WITH row
MATCH (source: `Student` { `StudentID`: toInteger(trim(row.`StudentID`)) })
MATCH (target: `Class` { `ClassID`: toInteger(trim(row.`ClassID`)) })
MERGE (source)-[r: `IS_IN`]->(target)
} IN TRANSACTIONS OF 10000 ROWS;
The modifications I made to get it working on my VM were the following:
:param {
idsToSkip: []
}; // I had to remove this block and merge idsToSkip into the first :param block where I defined my parameters, like so:
:param {
// Define the file path root and the individual file names required for loading.
// https://neo4j.com/docs/operations-manual/current/configuration/file-locations/
file_path_root: 'file:///', // Change this to the folder your script can access the files at.
file_0: 'class.csv',
file_1: 'student.csv',
idsToSkip: []
};
The second error I had was the following:
Neo.DatabaseError.Statement.ExecutionFailed: a query with CALL { ... } IN TRANSACTIONS
can only be executed in an implicit transaction, but tried to execute in an explicit transaction.
CALL {
WITH row
MATCH (source: `Student` { `StudentID`: toInteger(trim(row.`StudentID`)) })
MATCH (target: `Class` { `ClassID`: toInteger(trim(row.`ClassID`)) })
MERGE (source)-[r: `IS_IN`]->(target)
} IN TRANSACTIONS OF 10000 ROWS;
I had to remove every "IN TRANSACTIONS OF 10000 ROWS" clause to get it working.
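For reference, here is a sketch of what the relationship load looked like after my edit — the batching clause is gone, leaving a plain CALL subquery that runs in a single transaction:

```cypher
// Relationship load without batching (runs as one transaction).
LOAD CSV WITH HEADERS FROM ($file_path_root + $file_0) AS row
WITH row
CALL {
  WITH row
  MATCH (source: `Student` { `StudentID`: toInteger(trim(row.`StudentID`)) })
  MATCH (target: `Class` { `ClassID`: toInteger(trim(row.`ClassID`)) })
  MERGE (source)-[r: `IS_IN`]->(target)
};
```

I have since read that, in Neo4j Browser, prefixing the statement with `:auto` forces it to run in an implicit (auto-commit) transaction, which should allow keeping `IN TRANSACTIONS OF 10000 ROWS` — but I have not verified that myself, hence my question.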