Memory settings for big databases

Hello Andrew.
I ran the same optimised query today: it took 155 seconds the first time, then the 2nd and 3rd runs took 5-10 seconds.

MATCH (o:Resource)-[:ns10__P2044]->(h:Resource)
USING index h:Resource(ns0__quantityAmount)
WHERE h.ns0__quantityAmount > 8000
MATCH (o)<-[:ns8__P2044]-(r:Resource)-[:ns1__P31]->(end:Resource{uri:"http://www.wikidata.org/entity/Q8502"})
//USING JOIN ON end
RETURN r.rdfs__label, h.ns0__quantityAmount as height, r.ns1__P625
ORDER BY height desc

It seems that last time the data was cached, and without the cache even our optimised query runs really slowly.
I connected to the JVM over JMX and could see that there is a lot of free space in the heap and that processor activity is really low (1-2%). Overall the server seems unloaded except for I/O; obviously there are a lot of reads.
Maybe the file sizes will be helpful:

8,0K    ./neostore
4,4M    ./neostore.counts.db.a
4,4M    ./neostore.counts.db.b
4,0K    ./neostore.id
3,0G    ./neostore.labelscanstore.db
8,0K    ./neostore.labeltokenstore.db
4,0K    ./neostore.labeltokenstore.db.id
32K     ./neostore.labeltokenstore.db.names
4,0K    ./neostore.labeltokenstore.db.names.id
14G     ./neostore.nodestore.db
4,0K    ./neostore.nodestore.db.id
56K     ./neostore.nodestore.db.labels
4,0K    ./neostore.nodestore.db.labels.id
55G     ./neostore.propertystore.db
8,0K    ./neostore.propertystore.db.arrays
4,0K    ./neostore.propertystore.db.arrays.id
4,0K    ./neostore.propertystore.db.id
136K    ./neostore.propertystore.db.index
4,0K    ./neostore.propertystore.db.index.id
576K    ./neostore.propertystore.db.index.keys
4,0K    ./neostore.propertystore.db.index.keys.id
137G    ./neostore.propertystore.db.strings
4,0K    ./neostore.propertystore.db.strings.id
3,8G    ./neostore.relationshipgroupstore.db
4,0K    ./neostore.relationshipgroupstore.db.id
98G     ./neostore.relationshipstore.db
4,0K    ./neostore.relationshipstore.db.id
64K     ./neostore.relationshiptypestore.db
4,0K    ./neostore.relationshiptypestore.db.id
456K    ./neostore.relationshiptypestore.db.names
4,0K    ./neostore.relationshiptypestore.db.names.id
128K    ./neostore.schemastore.db
4,0K    ./neostore.schemastore.db.id
255M    ./neostore.transaction.db.2575
110M    ./neostore.transaction.db.2576

Most of the index files (about 95%) are less than 200 MB, but some are a few gigabytes.
It may also be relevant that I run Neo4j in the official Docker container.

Do you have any ideas why it might be so slow?

Can you PROFILE the query again, and expand all elements of the plan?

Even the PROFILE ran for 69 seconds.

Hello Andrew,

do you have any ideas why the query with the execution plan above is so slow?

If you have query logging turned on, you can observe things like cache hits vs misses. That info is actually available to you from PROFILE, but it's not shown on the browser profile plan. You can hit the Code result view button and look for PageCacheHits and PageCacheMisses to see what they say.
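For reference, query logging is switched on in neo4j.conf with something like this (a minimal sketch; a threshold of 0 logs every query):

dbms.logs.query.enabled=true
dbms.logs.query.threshold=0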

It shows no information about the cache, probably because I use Community edition.

I've tried to enable query logging with dbms.logs.query.enabled=true, but it doesn't log anything.

Okay, to get the most out of a PROFILE, try running it in cypher-shell. The textual representation is tough to read, but it provides more info than the browser PROFILE.
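Something like this, assuming a local instance (the address and credentials here are placeholders):

cypher-shell -a bolt://localhost:7687 -u neo4j -p <password>

Then run the query prefixed with PROFILE and terminated with a semicolon.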

I get:

+-----------+-------------+--------------+---------+---------------+------+--------+------+
| Plan      | Statement   | Version      | Planner | Runtime       | Time | DbHits | Rows |
+-----------+-------------+--------------+---------+---------------+------+--------+------+
| "PROFILE" | "READ_ONLY" | "CYPHER 3.5" | "COST"  | "INTERPRETED" | 3174 | 909717 | 16   |
+-----------+-------------+--------------+---------+---------------+------+--------+------+

+-----------------------+----------------+--------+---------+-----------+----------------------------------------------------------------------------------+----------------------------+----------------------------------------------------------------------------------------------------+
| Operator              | Estimated Rows | Rows   | DB Hits | Cache H/M | Identifiers                                                                      | Ordered by                 | Other                                                                                              |
+-----------------------+----------------+--------+---------+-----------+----------------------------------------------------------------------------------+----------------------------+----------------------------------------------------------------------------------------------------+
| +ProduceResults       |              0 |     16 |       0 |       0/0 | r.rdfs__label, r.ns1__P625, anon[143], height, anon[19], end, h, r, anon[171], o | height DESC                |                                                                                                    |
| |                     +----------------+--------+---------+-----------+----------------------------------------------------------------------------------+----------------------------+----------------------------------------------------------------------------------------------------+
| +Projection           |              0 |     16 |      32 |       0/0 | r.rdfs__label, r.ns1__P625, anon[143], height, anon[19], end, h, r, anon[171], o | height DESC                | {r.rdfs__label : r.rdfs__label, height : cached[h.ns0__quantityAmount], r.ns1__P625 : r.ns1__P625} |
| |                     +----------------+--------+---------+-----------+----------------------------------------------------------------------------------+----------------------------+----------------------------------------------------------------------------------------------------+
| +Filter               |              0 |     16 |      94 |       0/0 | anon[143], anon[19], cached[h.ns0__quantityAmount], end, h, r, anon[171], o      | h.ns0__quantityAmount DESC | end:Resource                                                                                       |
| |                     +----------------+--------+---------+-----------+----------------------------------------------------------------------------------+----------------------------+----------------------------------------------------------------------------------------------------+
| +Expand(All)          |              0 |     78 |     147 |       0/0 | anon[143], anon[19], cached[h.ns0__quantityAmount], end, h, r, anon[171], o      | h.ns0__quantityAmount DESC | (r)-[anon[171]:ns1__P31]->(end)                                                                    |
| |                     +----------------+--------+---------+-----------+----------------------------------------------------------------------------------+----------------------------+----------------------------------------------------------------------------------------------------+
| +Filter               |              0 |     69 |      69 |       0/0 | anon[143], anon[19], cached[h.ns0__quantityAmount], h, r, o                      | h.ns0__quantityAmount DESC | r:Resource                                                                                         |
| |                     +----------------+--------+---------+-----------+----------------------------------------------------------------------------------+----------------------------+----------------------------------------------------------------------------------------------------+
| +Expand(All)          |              0 |     69 |     138 |       0/0 | anon[143], anon[19], cached[h.ns0__quantityAmount], h, r, o                      | h.ns0__quantityAmount DESC | (o)<-[anon[143]:ns8__P2044]-(r)                                                                    |
| |                     +----------------+--------+---------+-----------+----------------------------------------------------------------------------------+----------------------------+----------------------------------------------------------------------------------------------------+
| +Filter               |             66 |     69 |      69 |       0/0 | anon[19], cached[h.ns0__quantityAmount], h, o                                    | h.ns0__quantityAmount DESC | o:Resource                                                                                         |
| |                     +----------------+--------+---------+-----------+----------------------------------------------------------------------------------+----------------------------+----------------------------------------------------------------------------------------------------+
| +Expand(All)          |             66 |     69 |  454618 |       0/0 | anon[19], cached[h.ns0__quantityAmount], h, o                                    | h.ns0__quantityAmount DESC | (h)<-[anon[19]:ns10__P2044]-(o)                                                                    |
| |                     +----------------+--------+---------+-----------+----------------------------------------------------------------------------------+----------------------------+----------------------------------------------------------------------------------------------------+
| +NodeIndexSeekByRange |          39080 | 454549 |  454550 |       0/0 | cached[h.ns0__quantityAmount], h                                                 | h.ns0__quantityAmount DESC | :Resource(ns0__quantityAmount) > {  AUTOINT0}                                                      |
+-----------------------+----------------+--------+---------+-----------+----------------------------------------------------------------------------------+----------------------------+----------------------------------------------------------------------------------------------------+

I get 0/0 for cache hits/misses :frowning:

Hmm... with your current model this is probably the best you can get, as the real selectivity is in that second-from-the-bottom Expand(All) step (cypher-shell query plans are read bottom-up), which goes from 454549 rows to 69. The relationships off of those nodes act as the filter.

Out of curiosity, what does this return?

MATCH ()-[r:ns10__P2044]->()
RETURN count(r)

The count is 1,661,172, not so many. It is really strange that the query takes about 200 seconds. The index seek should be very fast, so the bottleneck must be in the Expand.

Do you think using the Enterprise version would improve performance? What gain could we get from it?

I'd sooner look at the memory settings, specifically your heap and pagecache settings in your neo4j.conf, as well as the total memory on your machine. If you don't have enough memory available then you're going to have a lot of cache misses when doing that expand.
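For reference, these are the settings I mean (a sketch; the sizes are placeholders to tune against your store size and available RAM):

dbms.memory.heap.initial_size=16g
dbms.memory.heap.max_size=16g
dbms.memory.pagecache.size=48g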

Here's another query to try, though I'm doubting that it will improve on the other:

MATCH (h:Resource)
USING index h:Resource(ns0__quantityAmount)
WHERE h.ns0__quantityAmount > 8000 AND ()-[:ns10__P2044]->(h)
MATCH (o:Resource)-[:ns10__P2044]->(h)
MATCH (o)<-[:ns8__P2044]-(r:Resource)-[:ns1__P31]->(end:Resource{uri:"http://www.wikidata.org/entity/Q8502"})
//USING JOIN ON end
RETURN r.rdfs__label, h.ns0__quantityAmount as height, r.ns1__P625
ORDER BY height desc

Sorry @andrew_bowman for the late response. The memory settings should be fine.
I have an installation with 18G of heap and 54G of page cache.
I have monitored the heap with VisualVM and can say it has a lot of free space. The process's memory on the machine is about 75G, so all the settings seem to be applied correctly.

I have noticed that neostore.relationshipstore.db is 98G (I sent the sizes of all the db files in one of the previous messages). I suspect that Expand works with this file. Is that right? Should we have a page cache comparable to its size?

It's not correlated in that way. When you expand from a node, it only looks at the relationships connected to that node, so with the rows going from 454549 to 69 we can infer there are only 69 total relationships of that type among that set of nodes. The number, types, and directions of those relationships are recorded on the nodes themselves, so the operation won't actually touch the relationship store until it has to expand the relationships found; it will only hit the relationship store for those 69 relationships. The filtering based on the presence of the relationship type happens on the node record itself during expansion. If anything, you could look at the node store.
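As an illustration, a degree check like the following should be answered from the node records without expanding. This is a sketch, assuming the planner turns size() over a simple pattern into a degree lookup, as 3.x versions generally do:

MATCH (h:Resource)
WHERE h.ns0__quantityAmount > 8000
RETURN sum(size((h)<-[:ns10__P2044]-())) as incomingDegree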

We can try out a test to see if the index lookup and filtering are the bottleneck (I suspect they are, based on the db hits we're seeing).

Try getting node ids from this:

MATCH (h:Resource)
USING index h:Resource(ns0__quantityAmount)
WHERE h.ns0__quantityAmount > 8000 AND ()-[:ns10__P2044]->(h)
RETURN collect(id(h)) as ids

And then try the query below, starting from the nodes with those ids (you can pass them in as a parameter; see the sketch after the query). That timing will exclude the node lookups and filtering. If it's fast but your other query is slow, then the index lookups and filtering are the bottleneck.

MATCH (end:Resource{uri:"http://www.wikidata.org/entity/Q8502"})
UNWIND $ids as id
MATCH (h)
WHERE id(h) = id
MATCH (o:Resource)-[:ns10__P2044]->(h)
MATCH (o)<-[:ns8__P2044]-(r:Resource)
WHERE (r)-[:ns1__P31]->(end)
RETURN r.rdfs__label, h.ns0__quantityAmount as height, r.ns1__P625
ORDER BY height desc

We'll want to take a look at the expanded plan for this one.
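To pass the collected ids along, set them as a parameter first, which works in both the browser and cypher-shell (a sketch; the list values are placeholders):

:param ids => [12345, 67890]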

Sorry, I had trouble with my Neo4j instance.

Here is the explain plan for the second query:


It takes about 9 seconds the first time and about 50 ms afterwards.
The first query ran for about 4 minutes.

That makes sense. The initial index lookup and filtering are the bottleneck.

If there's anything you can do in your modeling to improve this, such as adding a special label on nodes that fit these criteria (or at least those with an incoming :ns10__P2044 relationship), that may help.
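A minimal sketch of that kind of labeling (the :HasHeight label name is hypothetical; pick whatever fits your model):

MATCH (h:Resource)
WHERE ()-[:ns10__P2044]->(h)
SET h:HasHeight

For a store this size you would likely want to batch the update, e.g. with apoc.periodic.iterate if APOC is installed.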

I don't think I can add labels for all the criteria I have. I will think more about how I can improve performance.

Do you have an idea why the second query, with only a few hundred db hits, takes about 9 seconds (I think that is quite a lot)?

I have investigated in a bit more depth.
I ran this query:

MATCH (o:Resource)-[:ns10__P2044]->(h:Resource)
USING index h:Resource(ns0__quantityAmount)
WHERE h.ns0__quantityAmount > 8000
MATCH (o)<-[:ns8__P2044]-(r:Resource)-[:ns1__P31]->(end:Resource{uri:"http://www.wikidata.org/entity/Q8502"})
//USING JOIN ON end
RETURN r.rdfs__label, h.ns0__quantityAmount as height, r.ns1__P625
ORDER BY height desc

and profiled Neo4j with VisualVM, and found that most of the time (~48 out of ~49 sec) is spent loading the relationships of the nodes (indeed, it misses the cache).


After watching the talk "An overview of Neo4j Internals" (it is a bit old, but from a short glance at the sources, everything regarding the structure of the files on disk should still be the same), it seems to me that:

  • as you said before, labeling should help narrow down the amount of data used in query processing
  • non-cached search seems to be fast only if you can filter out most of the data using indexes
  • disks with fast random access should significantly improve non-cached queries

Please correct me if I am wrong.

@manonthegithub do you have server monitoring in place? If so, please take note of your I/O throughput and operation counts, disk latency, and CPU iowait while you execute any of your queries when they are not cached.

Then you can start thinking about how to put more memory in that machine of yours. :slight_smile:

There are two points.
The first is that there are definitely a lot of disk reads and I/O waits when you run non-cached queries. That is normal and you can't avoid it. Also, throughput itself does not really matter; what matters is fast disk seek and low latency, because the data is read not sequentially but at arbitrary places.
The second point is that the Neo4j team suggests warming up your instance with a query that reads all the nodes and relationships, to pull data into the page cache. After warm-up it should be much faster, and at that point the amount of memory matters. Normally, you should start using the instance only after warm-up.
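A common variant of such a warm-up query looks like this (a sketch; apoc.warmup.run() from APOC achieves much the same more directly, if it is installed):

MATCH (n)
OPTIONAL MATCH (n)-[r]->()
RETURN count(n) + count(r)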