Converting "USING PERIODIC COMMIT" to a "CALL" with LOAD CSV

This code does the job, but it looks like this method will be deprecated soon, so I should use a CALL subquery instead.

I'm a Neo4j newbie. Can anyone point me in the right direction on how to convert a

"USING PERIODIC COMMIT" to a "CALL"

// IMPORT ECLASS
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM 'file:///eclass12-EN.csv' AS row
MERGE (e:Eclass { standard: row.Standard, versionNumber: row.VersionNumber, isoLanguageCode: row.ISOLanguageCode, codedName: row.CodedName, idcc:row.IdCC, preferredName: row.PreferredName, definition: COALESCE(row.Definition, ''), mkKeyword: COALESCE(row.MKKeyword, ''), irdicc: row.IrdiCc });


You can find more details here. Also, you should put only the property that carries the uniqueness constraint in the MERGE clause; the other properties should be set in a SET clause.

LOAD CSV WITH HEADERS FROM 'file:///eclass12-EN.csv' AS row 
CALL {
    WITH row
    MERGE (e:Eclass {idcc:row.IdCC}) 
    SET e += {
        standard: row.Standard, 
        versionNumber: row.VersionNumber, 
        isoLanguageCode: row.ISOLanguageCode, 
        codedName: row.CodedName, 
        preferredName: row.PreferredName, 
        definition: COALESCE(row.Definition, ''), 
        mkKeyword: COALESCE(row.MKKeyword, ''), 
        irdicc: row.IrdiCc 
    }
} IN TRANSACTIONS OF 1000 ROWS;

Regards,
Cobra

Hi Cobra,

Thank you for your swift reply! :slightly_smiling_face: I'm getting this error.

A query with 'CALL { ... } IN TRANSACTIONS' can only be executed in an implicit transaction, but tried to execute in an explicit transaction.

I tried this and it worked, but I'm not sure which method is best. I will soon be importing 200k records from a CSV file, so I would like to understand what the best practice is. :slightly_smiling_face:

:auto LOAD CSV WITH HEADERS FROM 'file:///eclass12-EN-50.csv' AS row
CALL {
    WITH row
    MERGE (e:Eclass { standard: row.Standard, versionNumber: row.VersionNumber, isoLanguageCode: row.ISOLanguageCode, codedName: row.CodedName, idcc: row.IdCC, preferredName: row.PreferredName, definition: COALESCE(row.Definition, ''), mkKeyword: COALESCE(row.MKKeyword, ''), irdiCc: row.IrdiCC })
} IN TRANSACTIONS OF 10 ROWS;

For CSV files with fewer than 10 million records, you can use LOAD CSV. Also, you should use one query to create the nodes and another one to create the relationships. Don't forget to create UNIQUE constraints before loading the nodes, for example as shown below.
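A minimal sketch of that constraint, assuming idcc is the unique key of your Eclass nodes (the constraint name is arbitrary; run this once before the import):

// Assumed unique key: idcc
CREATE CONSTRAINT eclass_idcc_unique IF NOT EXISTS
FOR (e:Eclass) REQUIRE e.idcc IS UNIQUE;

And a hypothetical second pass for the relationships. Your file does not seem to contain a parent column, so the ParentIdCC column and the PART_OF relationship type here are placeholders to adapt to your data:

// Hypothetical: assumes a ParentIdCC column and a PART_OF hierarchy
:auto LOAD CSV WITH HEADERS FROM 'file:///eclass12-EN.csv' AS row
CALL {
    WITH row
    MATCH (child:Eclass {idcc: row.IdCC})
    MATCH (parent:Eclass {idcc: row.ParentIdCC})
    MERGE (child)-[:PART_OF]->(parent)
} IN TRANSACTIONS OF 1000 ROWS;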

If you want to load larger datasets, you will have to look at bulk import.
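Bulk import here means the offline neo4j-admin import tool, which builds a new, empty database directly from CSV files. A rough sketch, assuming Neo4j 4.x and placeholder file names; note that the CSV headers must follow the import tool's :ID/:LABEL/:START_ID/:END_ID/:TYPE format, which is different from LOAD CSV:

# Offline initial import (placeholder file names)
bin/neo4j-admin import --database=neo4j --nodes=import/eclass_nodes.csv --relationships=import/eclass_rels.csv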