Hi community,
I'm looking for information about coding with PySpark against Neo4j. I read GitHub - neo4j-contrib/neo4j-spark-connector: Neo4j Connector for Apache Spark, which provides bi-directional read/write access to Neo4j from Spark using the Spark DataSource APIs. The README mentions PySpark, but all of the examples use Scala syntax.
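For context, here is my rough attempt at translating one of the Scala read examples into PySpark. Since the DataSource options are plain strings, I'm assuming they carry over unchanged; the URL, credentials, and label below are just placeholders from my local setup, so please correct me if the option names are wrong:

```python
from pyspark.sql import SparkSession

# Assumes the connector JAR is on the classpath, e.g. submitted with
# --packages org.neo4j:neo4j-connector-apache-spark_2.12:<version>
spark = (SparkSession.builder
         .appName("neo4j-pyspark-test")
         .getOrCreate())

# Read all nodes with the :Person label into a DataFrame
df = (spark.read.format("org.neo4j.spark.DataSource")
      .option("url", "bolt://localhost:7687")
      .option("authentication.basic.username", "neo4j")
      .option("authentication.basic.password", "password")  # placeholder credentials
      .option("labels", "Person")
      .load())

df.show()
```

Is this the right way to call the connector from Python, or is there something PySpark-specific I'm missing?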
Thanks for any help you can give me.