I have a question about Cypher queries and updating a database. I have a Python script that does web scraping and generates a CSV at the end. I use this CSV to import data into a Neo4j database.
The scraping runs 5 times a day. Each run appends its new data to the previous CSV, and I import the data after each run. The problem is that every import creates all the nodes again, even the ones that are already in the DB.
For example, the first CSV gives 5 rows and I insert them into Neo4j. The next scraping run adds 2 rows, so the CSV now has 7 rows. If I import the data again, the first five rows end up in the DB twice. I would like everything to be unique, with nothing added if it is already in the database.
For example, when I create an ARTICLE node I do this:
CREATE (a:ARTICLE {id:$id, title:$title, img_url:$img_url, link:$link, sentence:$sentence, published:$published})
I think MERGE instead of CREATE should solve the problem, but it doesn't and I can't figure out why.
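What I tried is essentially the same statement with CREATE swapped for MERGE, keeping all the properties in the pattern (sketch of my attempt, same parameters as above):

MERGE (a:ARTICLE {id:$id, title:$title, img_url:$img_url, link:$link, sentence:$sentence, published:$published})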
How can I do this?
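Should I be merging only on id and setting the other properties separately, and do I need a uniqueness constraint for that? Something like the following is what I have in mind (untested sketch on my side; the constraint syntax is for recent Neo4j versions):

CREATE CONSTRAINT article_id IF NOT EXISTS FOR (a:ARTICLE) REQUIRE a.id IS UNIQUE

MERGE (a:ARTICLE {id:$id})
ON CREATE SET a.title = $title, a.img_url = $img_url, a.link = $link, a.sentence = $sentence, a.published = $published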