
Neo4j GKE quick single instance deployment not working

tard_gabriel
Ninja

Hi there

I tried twice to deploy a Neo4j single instance on Google Kubernetes Engine following version 4.4 of the operations manual, and it simply doesn't work: every step in the process itself succeeds, but the end result is always a set of strange errors in the cluster.

I have experience with self-hosted Neo4j on virtual machines, but I have never had something this hard to get working. The concept of Kubernetes might be great, but there is a lot of work to do before it is accessible to the general public; businesses don't want to spend a lot of money just to "install an app".

I suspect that some instructions are missing from the cluster-creation step itself, since all the errors I got come from the cluster:

- Can’t scale up nodes because node auto-provisioning is disabled, which prevents provisioning of node groups.
- Can’t scale up because node auto-provisioning can’t provision a node pool for the Pod if it would exceed resource limits.
- Pod is blocking scale down because it’s a non-daemonset, non-mirrored, non-pdb-assigned kube-system pod.
- Pods unschedulable.
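When the quickstart fails like this, the scheduler events usually say exactly which resource could not be satisfied. A minimal sketch of the commands to inspect this (the pod name "my-neo4j-0" is a placeholder, adjust to your Helm release):

```shell
# List pods and spot any stuck in Pending (unschedulable)
kubectl get pods --all-namespaces

# Show scheduler events for a Pending pod; look for
# "Insufficient cpu" / "Insufficient memory" in the Events section
kubectl describe pod my-neo4j-0

# Compare against what the node can actually offer
kubectl describe nodes | grep -A 6 "Allocatable"
```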

The only change I made was setting the CPU and memory to 2 CPU and 8Gi in the default my.neo4j.values.yaml provided in the quick setup, that's it. I suspect the cluster created in the quick guide can only handle the minimum resource requests, which even for a quick setup is too limited.
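For reference, the change amounts to something like the following fragment of the values file (field names assumed from the Neo4j Helm chart defaults; the pod will request this much CPU and memory, so every node must have at least that much allocatable):

```yaml
# my.neo4j.values.yaml (fragment) - the only values changed from the defaults
neo4j:
  resources:
    cpu: "2"
    memory: "8Gi"
```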

Any light is welcome.

1 ACCEPTED SOLUTION

Hi Gabriel,

Thanks for reporting the issue to us.

We have taken a look and were able to reproduce the issue. It occurs because the default GKE instance type, "e2-medium", is not sufficient to schedule a Neo4j Pod. We encountered a similar issue with the cluster documentation and have updated it to use --machine-type "e2-standard-2" as the minimum instance type required to successfully run the example given in the documentation.

We are in the process of making the same update for the Neo4j standalone GKE documentation, and you should see the changes very soon. Meanwhile, could you please create the GKE cluster using the command below and try the quickstart guide again:

gcloud container clusters create my-neo4j-gke-cluster --num-nodes=1 --machine-type "e2-standard-2"

Once again, thanks for reporting the issue to us. We highly appreciate your time and effort.

Thanks,
Harshit


3 REPLIES

david_allen
Neo4j

Please report which commands you ran and exactly what error messages you got, along with your full values.yaml. It's tough to tell what's going on from the report you've provided.

Thanks David

Here is my yaml file
neo4j.values.yaml.txt (262 Bytes)

Here are the commands I used

Some investigation and support work I did with Google leads to this insight: the current operations manual gives instructions for creating a resource-limited deployment. As soon as you request more than 2 CPU or 4Gi of memory, the cluster created per the manual cannot handle it and the deployment fails.

One way to fix it, I think, would be to make the cluster-creation instructions in the quick deployment chapters a bit more flexible, so that the cluster can handle larger resource claims in the values file.
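If the values file requests more than a small node can allocate (as with the 2 CPU / 8Gi request above), the cluster needs correspondingly larger nodes. A sketch under the assumption that an e2-standard-4 (4 vCPU, 16 GB) leaves enough allocatable headroom for that request; the cluster name is just an example:

```shell
# Create a cluster whose single node can actually fit a pod requesting
# 2 CPU / 8Gi. Note that GKE reserves part of each node for system
# components, so an e2-standard-2 (2 vCPU, 8 GB) cannot schedule it.
gcloud container clusters create my-neo4j-gke-cluster \
  --num-nodes=1 \
  --machine-type "e2-standard-4"
```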
