An apoc.periodic.iterateSpark procedure that offloads batch jobs to Apache Spark

All the integrations I have seen between Spark and Neo4j put Spark in the driver's seat: Spark queries Neo4j, transforms the data it receives, and possibly writes results back to Neo4j.

I would prefer the experience we already have today with APOC's apoc.periodic.iterate procedure, except that instead of consuming processing capacity on the Neo4j server, it would send jobs to a Spark cluster. This would put Neo4j in the driver's seat: you issue a query that processes many rows of data, a procedure named something like apoc.periodic.iterateSpark sends the data and instructions to a Spark cluster endpoint, and the results are returned to the procedure.
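For concreteness, here is what that could look like. The first call below uses the real apoc.periodic.iterate API as it exists today; the second is purely hypothetical (neither apoc.periodic.iterateSpark nor a sparkEndpoint config key exists in APOC):

```cypher
// Today: apoc.periodic.iterate batches the inner statement on the Neo4j server.
CALL apoc.periodic.iterate(
  'MATCH (p:Person) RETURN p',
  'SET p.processed = true',
  {batchSize: 10000, parallel: true}
);

// Hypothetical: the same call shape, but batches are shipped to a Spark cluster.
CALL apoc.periodic.iterateSpark(
  'MATCH (p:Person) RETURN p',
  'SET p.processed = true',
  {batchSize: 10000, sparkEndpoint: 'spark://spark-master.example.com:7077'}
);
```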

Is there a way, something like this, to offload parallelizable workloads from Neo4j to Spark?

You could write your own custom procedure, which is all that APOC procedures are under the hood. Inside that procedure you can do whatever you need, including calling out to an external Spark cluster.
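As a sketch of what such a procedure could do internally, APOC's existing HTTP procedures can already submit a batch job to a Spark REST gateway such as Apache Livy directly from Cypher. Livy's POST /batches endpoint is real; the host, jar path, and class name below are placeholders:

```cypher
// Submit a Spark batch job through Apache Livy's REST API (POST /batches).
// The endpoint URL, jar path, and class name are placeholders for illustration.
CALL apoc.load.jsonParams(
  'http://livy.example.com:8998/batches',
  {`Content-Type`: 'application/json'},
  apoc.convert.toJson({
    file: '/jobs/my-batch-job.jar',
    className: 'com.example.MyBatchJob',
    args: ['bolt://neo4j.example.com:7687']
  })
)
YIELD value
RETURN value.id AS batchId, value.state AS state;
```

A custom Java procedure would essentially wrap this kind of HTTP call, adding batching of the input rows, polling for job completion, and streaming results back to the caller.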