My initial thought on looking at the script you are running is that you need to use different quote characters for the strings inside the script and for the quotes around the entire script. I would suggest double quotes around the whole Cypher script and single quotes around the string literals inside it (I like double quotes around the whole script; you could do it the other way around if you prefer). With that change the script looks like this...
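For illustration, here is a minimal sketch of that quoting pattern, assuming the load goes through `apoc.periodic.iterate` (the file name, label, and property names are placeholders, not your actual script): the two Cypher fragments passed to the procedure are wrapped in double quotes, while any string literals inside them use single quotes.

```
CALL apoc.periodic.iterate(
  // outer double quotes wrap each Cypher fragment...
  "LOAD CSV WITH HEADERS FROM 'file:///data.csv' AS row RETURN row",
  // ...while string literals inside the fragments use single quotes
  "CREATE (:Record {id: row.id, source: 'csv-import'})",
  {batchSize: 1000, parallel: false}
)
```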
Thank you for the answer. The query loads and throws no errors, but the execution is taking too long: it still hadn't completed after 4+ hours. The previous file (300k rows) completed in about an hour, but this file contains 700k rows. Can you also advise on what else I should do? I have already increased the heap size and reduced the batch size further.
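(For context, by "increased the heap size" I mean the heap settings in `neo4j.conf`; the values below are placeholders rather than the actual ones used, and assume Neo4j 4.x names — Neo4j 5 uses the `server.memory.*` prefix instead.)

```
# neo4j.conf (Neo4j 4.x setting names; placeholder values)
dbms.memory.heap.initial_size=4G
dbms.memory.heap.max_size=4G
dbms.memory.pagecache.size=2G
```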
@cobra I was wondering about the parsing of the date and time. Do you have another way to get this data from the CSV into the database as a datetime without that parsing?
I did load the previous CSV file (300k rows) without parsing the date and time, but I need queries that can handle the date and time columns in their proper temporal types rather than as strings. Is there another way around this if the parsing takes too much time? Can these columns be parsed later on?
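To make the question concrete, something like the following is what I have in mind by "parsed later on": a batched post-load conversion of the string columns into a single datetime property. The label (`Record`), property names (`dateStr`, `timeStr`, `ts`), and the assumption that the columns are in `yyyy-MM-dd` / `HH:mm:ss` form are placeholders, not my actual schema.

```
CALL apoc.periodic.iterate(
  // pick up nodes whose datetime property has not been set yet
  "MATCH (r:Record) WHERE r.ts IS NULL RETURN r",
  // combine the string date and time columns into a native datetime value
  "SET r.ts = datetime(r.dateStr + 'T' + r.timeStr)",
  {batchSize: 10000, parallel: false}
)
```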