We have a dataset that requires more storage than is available per CPU, while our use case is not CPU intensive. At the moment we are using 60% of the storage on a 10 CPU/48 GB memory/96 GB storage instance, and our CPU utilization is consistently under 5%.
We seem to need no more than 2 CPUs and 16 GB of memory, but between 64 GB and 96 GB of storage. Our users only access a portion of the data at a time, so a small amount of paging is expected even under ideal circumstances.
How can we get an AuraDB instance with a 2 CPU/16 GB memory/96 GB storage combination?
I just wanted to give a huge thumbs up to this post, because it's a big issue for me as well. Especially if you are storing string embeddings in Neo4j, or any kind of real data, you end up needing HUGE storage but TINY memory and CPU. This is not an option in AuraDS, and you end up needing to self-host...
I shared this post with Aman Singh, the Product Manager for the self-serve product. Given that Aman's goal is to ensure they are building the right thing for the right customers, this feedback should provide guidance on a use case that seems unsupported at present:
A large foundational dataset with only a sparse matrix of user activity interacting with it.
We are migrating to Memgraph. Resolving this issue and adding support for triggers would bring us back to Neo4j.