Which is better for evaluating FastRP embedding similarity: cosine distance or Euclidean distance?

Hi everyone,

I am working on a huge graph with 230M nodes and 600M edges. I trained embeddings using the built-in FastRP algorithm. The final objective is to find nodes similar to a reference (pivot) embedding. When I evaluate the embeddings with cosine similarity, the scores come out very close to each other, even for nodes that are not similar from a real-life perspective. When I compute similarity using Euclidean distance instead, the results look much more reasonable. I don't understand which approach is better for finding similar nodes. Also, why are the cosine similarities so close together? Is it related to the training algorithm?
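For context, this is roughly how I compare the two metrics after exporting the embeddings. It is a simplified sketch: the random vectors below are just placeholders standing in for the real FastRP embeddings and the pivot node.

```python
import numpy as np
from scipy.spatial.distance import cdist

# Placeholder data: random vectors standing in for the exported FastRP
# embeddings and the reference pivot embedding.
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(1000, 256))   # candidate node embeddings
pivot = rng.normal(size=(1, 256))           # reference pivot embedding

# Cosine similarity (1 - cosine distance): depends only on direction.
cosine_sim = 1.0 - cdist(pivot, embeddings, metric="cosine").ravel()

# Euclidean distance: also sensitive to vector magnitude.
euclidean_dist = cdist(pivot, embeddings, metric="euclidean").ravel()

# Top-10 most similar nodes under each metric.
top_cosine = np.argsort(-cosine_sim)[:10]
top_euclidean = np.argsort(euclidean_dist)[:10]
print("cosine top-10 indices:   ", top_cosine)
print("euclidean top-10 indices:", top_euclidean)
```

With the real embeddings, the cosine top-10 scores are almost identical to each other, while the Euclidean ranking separates the nodes much more clearly.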

Thanks in advance.