Xiangrui Meng of Databricks recently posted this on the Apache Spark project users list:
Databricks and Neo4j contributors are looking to bring Cypher queries into the core Spark project as part of Spark 3.0 (slated for release mid-year 2019). This will build on elements from the Cypher for Apache Spark and GraphFrames projects.
All the details are in the links in Xiangrui's e-mail.
It would be great to see Neo4j and Spark community users express their support for, or add feedback on, this Spark Project Improvement Proposal before it goes to a vote in the Spark dev community.
The more detail you can provide on your interest in this, the better; but if you are short of time, a simple +1 in reply to Xiangrui's post would be great.