Updating database from a really large CSV

I have a database with about 30K nodes. For simplicity, let's suppose it looks like

appID
1
2
3

I want to update my database with information from a much larger CSV file (19M rows) that looks like

appId userID
1 400
2 450
5 300

So, the end result will be

appId userID
1 400
2 450
3

Could someone help me do this efficiently? I've tried the query below, but it takes a very, very, very long time (nearly forever).

Many thanks.

LOAD CSV WITH HEADERS FROM 'file:///otherFile.csv' AS row
CALL {
  WITH row
  MATCH (a:application {appID: row.appID})
  MERGE (u:UserId {userId: row.userID})
  MERGE (a)-[:connectsTo]->(u)
};
  1. Do you have indexes defined on appID (for the label application) and userId (for the label UserId)?
  2. Add “IN TRANSACTIONS OF 10000 ROWS” to your CALL subquery so that it batches your updates (see the sketch after this list). 10,000 is a starting value; you may experiment with larger values.
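
For example, a sketch of the batched version, keeping the file name, labels, and property keys from the query above (adjust them to match your actual schema):

LOAD CSV WITH HEADERS FROM 'file:///otherFile.csv' AS row
CALL {
  WITH row
  // look up the existing application node, then merge the user node and relationship
  MATCH (a:application {appID: row.appID})
  MERGE (u:UserId {userId: row.userID})
  MERGE (a)-[:connectsTo]->(u)
} IN TRANSACTIONS OF 10000 ROWS;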

Both appID and userID have uniqueness constraints on them, and thus indexes.
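
For reference, on Neo4j 4.4 and later such uniqueness constraints would look roughly like the following (the constraint names here are placeholders; the labels and property keys follow the query above):

// a uniqueness constraint also creates a backing index on the property
CREATE CONSTRAINT app_id_unique IF NOT EXISTS
FOR (a:application) REQUIRE a.appID IS UNIQUE;

CREATE CONSTRAINT user_id_unique IF NOT EXISTS
FOR (u:UserId) REQUIRE u.userId IS UNIQUE;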

When using “IN TRANSACTIONS OF 10000 ROWS”, I get the error message "A query with 'CALL { ... } IN TRANSACTIONS' can only be executed in an implicit transaction, but tried to execute in an explicit transaction."

Many thanks.

Begin the query with “:auto”.
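
For example, a sketch of the full statement with the :auto prefix, so that LOAD CSV runs in an implicit (auto-commit) transaction; the file name, labels, and property keys are carried over from the query above:

:auto LOAD CSV WITH HEADERS FROM 'file:///otherFile.csv' AS row
CALL {
  WITH row
  MATCH (a:application {appID: row.appID})
  MERGE (u:UserId {userId: row.userID})
  MERGE (a)-[:connectsTo]->(u)
} IN TRANSACTIONS OF 10000 ROWS;

Note that :auto is a Neo4j Browser / cypher-shell command rather than part of Cypher itself, so it applies when running the query from those clients.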