Can the Cypher query

MATCH p=(n:customer)-[r*1..{max_hops}]->(m:customer)
RETURN nodes(p) AS nodes, relationships(p) AS relationships, length(p) AS hops

be memory-optimized while still yielding the same result?
Not really, because of the 'generic'-ness and 'global'-ness of the query.
"Find me a node with label :customer" — if you have 100k :customer nodes, then it needs to examine all 100k of them. Then, for each of those 100k :customer nodes, it traverses any relationship (that could be :IS_SUBCONTRACTOR_OF, or :IS_PARENT_OF, or :IS_CHILD_OF, or any of the N relationship types hung off a :customer node) for up to {max_hops} hops to reach another :customer node.
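You can see this cost for yourself with PROFILE (a sketch with the hop bound hard-coded to 3 for illustration, since Cypher does not accept a parameter inside the variable-length bounds):

```cypher
// Sketch: profiling the original query on a small hop bound.
// Expect a NodeByLabelScan over every :customer node, followed by a
// VarLengthExpand(All) that walks every relationship type at each step.
PROFILE
MATCH p = (n:customer)-[r*1..3]->(m:customer)
RETURN count(p) AS paths
```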
Are you able to provide any more specifics in the query so as to reduce the amount of work performed?
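For example, anchoring the traversal on a single start node and a single relationship type avoids the full label scan entirely. This is only a sketch: the indexed `customerId` property and the :PAYS relationship type are assumptions — substitute whatever applies to your model.

```cypher
// Sketch: start from one indexed node and follow only one relationship type.
// `customerId` and :PAYS are assumptions -- replace with your own property/type.
MATCH p = (n:customer {customerId: $customerId})-[:PAYS*1..3]->(m:customer)
RETURN nodes(p) AS nodes, relationships(p) AS relationships, length(p) AS hops
```

The more you can narrow the start set and the relationship types, the less the planner has to expand at every hop.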
I have a dataframe with 5 columns: payer, payee, weight, count, amount. The weights were created for each payer–payee relation using the transaction counts and amounts from the payer to the payee. Feeding this dataframe to Neo4j, I am creating a graph and assigning communities to each of the payer/payee nodes using the Louvain algorithm. But while running hop analysis to find the 3-hop relationships in the graph, I am getting a memory error, although it runs fine for a graph with a small number of nodes.
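One way to keep the 3-hop analysis within memory is to expand one Louvain community at a time rather than the whole graph at once. This is a sketch under assumptions: it presumes Louvain wrote a `community` property back onto the nodes, and that the nodes and relationships are labeled :customer and :PAYS — adjust all three to your actual schema.

```cypher
// Sketch: run the 3-hop expansion one Louvain community at a time.
// `community`, :customer, and :PAYS are assumptions about your schema.
MATCH p = (n:customer {community: $communityId})-[:PAYS*1..3]->(m:customer)
RETURN nodes(p) AS nodes, relationships(p) AS relationships, length(p) AS hops
```

Looping over the distinct community ids from the client side then gives the same coverage in smaller, bounded chunks.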