Hello yarucho!
What might help for your use case is the Label Propagation algorithm. The docs:
At the beginning of the algorithm computation, every node is initialized with a unique label, and the labels propagate through the network.
However, this algorithm also lets you specify the label that a node begins with. That is, if you previously ran the algorithm and found that nodes A, X, and Z were in community 15, you set their seedProperty value to 15 when you run the algorithm again.
This will speed up the computation because you are giving it a starting place based on known information.
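As a rough sketch, the seeded run might look like this in Cypher. The projected graph name ('myGraph') and the property name ('community') are placeholders, not something from your setup:

```
// Hypothetical sketch: 'myGraph' and 'community' are placeholder names.
// 'community' holds labels from a previous run; nodes without it
// are initialized with fresh unique labels as usual.
CALL gds.labelPropagation.write('myGraph', {
  seedProperty: 'community',
  writeProperty: 'community'
})
YIELD communityCount, ranIterations
RETURN communityCount, ranIterations;
```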
More information is in the GDS Label Propagation documentation.
Note: Louvain allows for the same thing, so you might try that as well. (WCC does as well, but that doesn't sound like the algorithm you want.)
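Since Louvain accepts the same configuration key, switching is mostly a matter of changing the procedure name (again, graph and property names below are placeholders):

```
// Hypothetical sketch: same seeding idea, Louvain procedure instead.
CALL gds.louvain.write('myGraph', {
  seedProperty: 'community',
  writeProperty: 'community'
})
YIELD communityCount
RETURN communityCount;
```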
To your other question, I haven't seen any LSH support in GDS, but the idea behind LSH, if I recall, is that similar vectors end up in the same hash bucket. The closest equivalent in Neo4j GDS would be node embeddings. For example, FastRP embeds each node based on its neighborhood, so similar nodes end up nearby in vector space, even after "aggressive dimensionality reduction."
There's a very cool example notebook using both FastRP and KNN:
https://github.com/neo4j/graph-data-science-client/blob/main/examples/fastrp-and-knn.ipynb
I'm not 100% sure how useful this would be to you, but one thought: if you ran FastRP, saved the embedding as a node property, and then ran it again with the same random seed on just the new nodes (and their neighbors/properties — I think that would work?), you might be able to get something useful going.
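That pipeline might look roughly like the following; the fixed randomSeed is what makes runs reproducible, and all graph/property/relationship names here are placeholder assumptions:

```
// Embed nodes; fixing randomSeed keeps the projection reproducible.
CALL gds.fastRP.mutate('myGraph', {
  embeddingDimension: 128,
  randomSeed: 42,
  mutateProperty: 'embedding'
});

// Then link each node to its nearest neighbors in embedding space.
CALL gds.knn.write('myGraph', {
  nodeProperties: ['embedding'],
  topK: 10,
  writeRelationshipType: 'SIMILAR',
  writeProperty: 'score'
})
YIELD relationshipsWritten
RETURN relationshipsWritten;
```

To be clear, whether embeddings from a projection containing only the new nodes would line up with the old ones is exactly the part I'm unsure about, so treat this as an experiment rather than a recipe.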
Hope that helps, and happy graphing!
-V.Rupp