Failed to invoke procedure `apoc.import.csv`

I am going nuts. I have been trying to load data from a CSV in a newly installed version of Neo4j (server, Browser, and Desktop), downloaded from the web on Wednesday. I am running Windows 10 on a Microsoft Surface, with regular OS updates applied.

This is a screenshot of the APOC statement and the error I am getting.

Config file:

dbms.directories.import=import
apoc.import.file.enabled=true
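
One thing worth double-checking (an aside, not in the original post): since apoc.export.csv.query is also used later in this thread, note that the export procedures are gated by their own flag, separate from the import one. Verify against your APOC version's docs, but the setting is typically:

apoc.export.file.enabled=true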

Can you please try using file:///persons.csv instead?

Did this get resolved? I am still getting the same error.

I was following the example from
https://neo4j.com/docs/labs/apoc/current/import/import-csv/

Was this resolved later? I am getting the same exception.

In the persons.csv and knows.csv files mentioned in the link above, the delimiter is a pipe ('|'), so you must specify that delimiter in the Cypher call. Here is the corrected query that works:

CALL apoc.import.csv(
  [{fileName: 'file:/persons.csv', labels: ['Person']}],
  [],{delimiter: '|', arrayDelimiter: ',', stringIds: false}
)
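
For reference, the full example on that docs page also imports the relationships from knows.csv; the second argument of apoc.import.csv takes the list of relationship files (this follows the linked docs, with the pipe delimiter those sample files use):

CALL apoc.import.csv(
  [{fileName: 'file:/persons.csv', labels: ['Person']}],
  [{fileName: 'file:/knows.csv', type: 'KNOWS'}],
  {delimiter: '|', arrayDelimiter: ',', stringIds: false}
)

Passing an empty list [] for the second argument is fine when there are no relationship files, but the argument itself cannot be omitted.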

I am using the default delimiter, i.e. comma; however, I had also tried specifying it explicitly, with the same error.

neo4j> CALL apoc.import.csv([{fileName: 'file:/test_exp.csv', labels: ['MTR']}], , {})
;
Failed to invoke procedure apoc.import.csv: Caused by: java.util.NoSuchElementException: No value present
neo4j> CALL apoc.import.csv([{fileName: 'file:///test_exp.csv', labels: ['MTR']}], , {})
;
Failed to invoke procedure apoc.import.csv: Caused by: java.util.NoSuchElementException: No value present

neo4j> CALL apoc.import.csv([{fileName: 'file:/test_exp.csv', labels: ['MTR']}], , {delimiter: ','})
;
Failed to invoke procedure apoc.import.csv: Caused by: java.util.NoSuchElementException: No value present

neo4j> CALL apoc.import.csv([{fileName: 'file:/test_exp.csv', labels: ['MTR']}], , {delimiter: ',', arrayDelimiter: ',', stringIds: false});
Failed to invoke procedure apoc.import.csv: Caused by: java.util.NoSuchElementException: No value present

Please post one row of data with headers to further investigate the issue.

This is what I used to export the data, and the same file I then tried to load. LOAD CSV works but I don't see any data, whereas the import is not working at all.

WITH "MATCH (m:MTR) WHERE m.MR starts with '10432' return m" AS query
CALL apoc.export.csv.query(query, "test_exp.csv", {})
YIELD file, source, format, nodes, relationships, properties, time, rows, batchSize, batches, done, data
RETURN file, source, format, nodes, relationships, properties, time, rows, batchSize, batches, done, data;

The header of the CSV looks like this:
"m"
"{""id"":110220,""labels"":[""MTR""],""properties"":{""CREATED_BY"":""ADAPTER"",""CREATETIMESTAMP"":1579784846045,""MODIFIED_BY"":""ADAPTER"",""DATETIME"":202001230807,""UPDATETIMESTAMP"":1579784846045,""MR"":""1043262"",""PAYMENTATTEMPTFLAG"":""1""}}"

The exported file is not in the correct format for apoc.import.csv.
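
For comparison, apoc.import.csv expects headers in the neo4j-admin import style, with an :ID column and typed field names. The persons.csv from the linked docs looks roughly like this (pipe-delimited in that example):

:ID|name:STRING
1|John
2|Jane

The exported file above has a single "m" column containing a JSON dump of the whole node, with no :ID field at all; a missing :ID in the header is one known cause of the "No value present" NoSuchElementException, though that is an inference worth verifying against your APOC version.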

Try this for Export:

WITH "MATCH (m:MTR) WHERE m.MR starts with '10432' return m.CREATED_BY, m.CREATETIMESTAMP, m.MODIFIED_BY, m.DATETIME, m.UPDATETIMESTAMP, m.MR, m.PAYMENTATTEMPTFLAG" AS query
CALL apoc.export.csv.query(query, "/test_exp.csv", {})

This saves the CSV file in the import folder.

Run this Cypher to create the node:

LOAD CSV WITH HEADERS FROM "file:/test_exp.csv" AS row
MERGE (m:MTR {CREATED_BY: row.CREATED_BY, CREATETIMESTAMP: row.CREATETIMESTAMP, MODIFIED_BY: row.MODIFIED_BY, DATETIME: row.DATETIME, UPDATETIMESTAMP: row.UPDATETIMESTAMP, MR: row.MR, PAYMENTATTEMPTFLAG: row.PAYMENTATTEMPTFLAG})
RETURN m

Thanks, I will try this, but the problem with the above statement is that it has a fixed set of properties, whereas my columns vary from 6 to 30. How can I include them all? Any suggestion on that?
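
One way to handle a varying set of columns (a sketch, not confirmed in this thread; it assumes every column should become a property and that a key column such as MR is always present):

LOAD CSV WITH HEADERS FROM "file:/test_exp.csv" AS row
MERGE (m:MTR {MR: row.MR})
SET m += row
RETURN m

SET m += row copies every column of the row onto the node, so the query does not need to list the properties explicitly. Keep in mind that LOAD CSV reads every value as a string; apply toInteger()/toFloat() where typed values are needed.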

Also, if I want to include a node and all of its relationships in a single query, how would that be done?
For example, is this doable?
WITH "MATCH(m:MTR{MR:'201032862'})-[r]-(n) return m,n,r" AS query
CALL apoc.export.csv.query(query, "test_exp_727_1.csv", {})
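
One option worth investigating (a suggestion, not confirmed in this thread): the apoc.export.csv.all and apoc.export.csv.graph procedures accept a bulkImport: true option that is documented to split the output into per-label node files and per-type relationship files, using the header format apoc.import.csv expects. For the whole database:

CALL apoc.export.csv.all('test_exp_bulk.csv', {bulkImport: true})

For a single subgraph rather than the whole database, building a virtual graph with apoc.graph.fromCypher and passing it to apoc.export.csv.graph may work, but check the docs for your APOC version.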

@trevor.miles did you ever get it working? I have the same issue and am not able to get it to work.

It's better to change:

dbms.directories.import=import

to

dbms.directories.import=ABSOLUTE_PATH_TO_DIRECTORY

The default buries the import directory deep in a subdirectory specific to that instance of the DB.
And you do have to specify the file as:

file:///FILE_RELATIVE_TO_IMPORT_DIRECTORY
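
For example, on Windows this could look like the following (the directory path here is just an illustration; substitute your own):

dbms.directories.import=C:/neo4j-import

Then a file at C:/neo4j-import/test_exp.csv is referenced as:

file:///test_exp.csv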