Read / write performance dramatically degrades with concurrent queries

  • Neo4j version: 3.5.13 enterprise
  • Driver: Java Bolt driver 1.7.5
  • Server: Single node - 8 vCPUs - 64 GB RAM - Ubuntu 18.04
  • Heap: 24 GB
  • Pagecache: 28 GB

I am noticing that the performance of my reads and writes degrades very quickly when running concurrent queries over an extended period of time. I have a single-node Neo4j server and 2 workers that run on different servers. Each worker runs 10 threads and processes a steady stream of transactions; ideally I would like to have more workers. For each transaction, a thread runs a few Cypher queries, a mix of reads and writes. The database size is currently ~3.5 GB (4M nodes, 10M relationships).
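For reference, each worker thread issues its queries roughly like this, assuming one shared Driver per worker process and a short-lived session per incoming transaction (the class name and the exact queries below are simplified placeholders, not the real code):

import org.neo4j.driver.v1.AuthTokens;
import org.neo4j.driver.v1.Driver;
import org.neo4j.driver.v1.GraphDatabase;
import org.neo4j.driver.v1.Session;
import org.neo4j.driver.v1.Values;

public class TransactionWorker {

    // One Driver instance per worker process, shared by all 10 threads (it owns the connection pool).
    private static final Driver driver =
            GraphDatabase.driver("bolt://neo4j-host:7687", AuthTokens.basic("neo4j", "password"));

    // Each thread calls this once per incoming transaction: a couple of reads, then a write.
    void process(long personaId, String cardLast4, String cardBin) {
        try (Session session = driver.session()) {
            // Read side: look up the card node (simplified stand-in for the real read queries).
            Long cardId = session.readTransaction(tx -> tx
                    .run("MATCH (n:Card {card_last_4: $card_last_4, card_bin: $card_bin}) RETURN id(n) AS node_id",
                         Values.parameters("card_last_4", cardLast4, "card_bin", cardBin))
                    .list().stream()
                    .map(r -> r.get("node_id").asLong())
                    .findFirst().orElse(null));

            // Write side: link the card to the persona (again a simplified stand-in).
            if (cardId != null) {
                session.writeTransaction(tx -> {
                    tx.run("MATCH (ps:Persona), (n:Card) WHERE id(ps) = $pid AND id(n) = $cid "
                         + "MERGE (n)-[:BELONGS_TO]->(ps)",
                           Values.parameters("pid", personaId, "cid", cardId));
                    return null;
                });
            }
        }
    }
}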

I have the slow query log enabled (threshold 100 ms) and notice that some read queries start showing up there pretty quickly. Over a short period of time (a few minutes), the same queries go from 150-200 ms to more than 1 s in the slow query log, and then stay in the 1.5-2 s range. If I increase the number of worker threads, the times climb even higher.

One of the example queries is the following:

match (ps:Persona)<-[:BELONGS_TO]-(n:Card {card_last_4: {card_last_4},card_bin: {card_bin}}) where id(ps)=93159 return id(n) as node_id

Here is the above query's profile:

Could using where id(ps)=x be the culprit here? I am not sure why that would be an issue performance-wise. My goal is a throughput of around 500-600 transactions/s, but I am only reaching 8-10/s with the current performance.
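In case it is relevant, here is a sketch of the same lookup with the persona id passed in as a parameter rather than hard-coded, so the query text stays identical across calls and the plan cache can be reused. The class and method names are just illustrative, and $param is the newer equivalent of the {param} syntax above:

import java.util.List;

import org.neo4j.driver.v1.Record;
import org.neo4j.driver.v1.Session;
import org.neo4j.driver.v1.Values;

public class CardLookup {

    // Hypothetical helper: the same lookup as the query above, but with the persona id
    // passed in as a parameter so the query text is identical for every call.
    static Long findCardForPersona(Session session, long personaId, String cardLast4, String cardBin) {
        return session.readTransaction(tx -> {
            List<Record> records = tx.run(
                    "MATCH (ps:Persona)<-[:BELONGS_TO]-(n:Card {card_last_4: $card_last_4, card_bin: $card_bin}) "
                  + "WHERE id(ps) = $persona_id "
                  + "RETURN id(n) AS node_id",
                    Values.parameters("card_last_4", cardLast4, "card_bin", cardBin, "persona_id", personaId))
                .list();
            if (records.isEmpty()) {
                return null;  // no matching card for this persona
            }
            return records.get(0).get("node_id").asLong();
        });
    }
}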

@marvin-hansen I am running into a similar issue. Could you please share what you ended up using, and how it has worked out for you over the past 2 years?
Thanks