What is the recommended method for updating Neo4j graph node values after doing some processing in Apache Spark?
For my use case, I have a graph in Neo4j and I want to run Spark GraphX's Pregel algorithm on its data. I am planning to use Cypher for Apache Spark (CAPS) to read the graph from Neo4j and run GraphX's Pregel on it.
One possible way to do this is shown in the example Neo4jMergeExample.scala (https://github.com/opencypher/cypher-for-apache-spark/blob/master/spark-cypher-examples/src/main/scala/org/opencypher/spark/examples/Neo4jMergeExample.scala). This approach requires creating a copy of the original Neo4j graph in the Spark session, applying the updates on top of that copy, and then using Neo4jGraphMerge.merge() to write the changes back to Neo4j.
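For reference, here is a rough sketch of that workflow, based on the linked example. The API names (Neo4jConfig, GraphSources, Neo4jGraphMerge, EntityKeys), the Bolt URI, and the "Person"/"id" key are assumptions taken from the example and may differ across CAPS versions:

```scala
import java.net.URI

import org.opencypher.okapi.api.graph.Namespace
import org.opencypher.okapi.neo4j.io.Neo4jConfig
import org.opencypher.spark.api.{CAPSSession, GraphSources}
import org.opencypher.spark.util.Neo4jGraphMerge

implicit val session: CAPSSession = CAPSSession.local()

// Connection details are placeholders for a local Neo4j instance
val neo4jConfig = Neo4jConfig(new URI("bolt://localhost:7687"), "neo4j", Some("password"))
session.registerSource(Namespace("neo4j"), GraphSources.cypher.neo4j(neo4jConfig))

// Step 1: copy the graph from Neo4j into the CAPS session
val graphCopy = session.cypher("FROM GRAPH neo4j.graph RETURN GRAPH").graph

// Step 2: convert to GraphX, run Pregel, and rebuild a property graph
// (omitted here; produces `updatedGraph`)

// Step 3: merge the updated copy back into Neo4j, matching nodes on a key
// property (here assumed to be an `id` property on `Person` nodes)
// val entityKeys = EntityKeys(Map("Person" -> Set("id")))
// Neo4jGraphMerge.merge(entityKeys, updatedGraph, neo4jConfig)
```

The merge step matches entities by the declared keys rather than recreating them, but it still requires the full copy of the graph to exist in the session first, which is what the question below is asking about.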
Is there a way to do this without creating a copy of the original Neo4j graph in the CAPS session?
What is the recommended method for writing results from Spark back to Neo4j?