Helm kubernetes error while following standalone install process

Hello. I'm quite new to Neo4j and I'm following this process:

When the pod is starting at this step, I get the error below about user 7474.

I checked the parameters accessible in the helm setup (helm show values neo4j/neo4j-standalone), and none of the parameters I can change seems related to it: I can't change the docker command or the docker env.
Error logs from GKE UI:

Consider unsetting SECURE_FILE_PERMISSIONS environment variable, to enable docker to write to /metrics.

Folder /metrics is not accessible for user: 7474 or group 7474 or groups 7474, this is commonly a file permissions issue on the mounted folder.

Hints to solve the issue:

1) Make sure the folder exists before mounting it. Docker will create the folder using root permissions before starting the Neo4j container. The root permissions disallow Neo4j from writing to the mounted folder.

2) Pass the folder owner's user ID and group ID to docker run, so that docker runs as that user.

If the folder is owned by the current user, this can be done by adding this flag to your docker run command:

--user=$(id -u):$(id -g)
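
Putting those two hints together for a plain docker run, a sketch might look like this (the host path and image tag are illustrative, not taken from the chart):

```shell
# Create the host folder first, so it is owned by the current user
# rather than being auto-created as root by docker.
mkdir -p "$HOME/neo4j/metrics"

# Run the container as the folder's owner, so the Neo4j process
# can write to the mounted folder.
docker run --user="$(id -u):$(id -g)" \
  --volume="$HOME/neo4j/metrics":/metrics \
  neo4j:4.4
```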

Hi @XavPil

You have to create the docker container with a user that has access to all of the neo4j folders, otherwise you will run into problems like the one you are showing here. You don't have to change the docker command; you only have to create the container with a user (it can be a newly created user) that has those permissions before trying to install neo4j inside the container.

I have the same problem. I don't quite understand what you mean; can you elaborate on the solution?

Hi @heyongfeng,

What I mean is that you can try to add this in your .yaml file:

securityContext:
  runAsNonRoot: true
  runAsUser: 7474   # UID of the neo4j user
  runAsGroup: 7474  # GID of the neo4j group

Watch out for the indentation that .yaml files need. I hope that this guides you to your specific solution.
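
If the error is specifically about a mounted volume, it may also help to set fsGroup, which asks Kubernetes itself to make the volume writable by that group. A sketch of the block (whether the chart exposes this exact key is an assumption; check `helm show values neo4j/neo4j-standalone`):

```yaml
securityContext:
  runAsNonRoot: true
  runAsUser: 7474   # UID of the neo4j user in the official image
  runAsGroup: 7474  # GID of the neo4j group
  fsGroup: 7474     # Kubernetes sets this group on mounted volumes
```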

Thank you for your suggestion. I am using version 4.4.6, where the default is the neo4j user. I used initContainers to solve the above problems:

initContainers:
  - name: "init-neo4j"
    image: "{{ $neo4jImage }}"
    securityContext:
      runAsGroup: 0
      runAsUser: 0
    command: ['sh']
    args:
      - "-c"
      - |
        chown -R neo4j:neo4j /logs
        mkdir -p /metrics
        chown -R neo4j:neo4j /metrics
        mkdir -p /data
        chown -R neo4j:neo4j /data
    volumeMounts:
      - mountPath: /data
        name: data
        subPathExpr: data
      - mountPath: /logs
        name: data
        subPathExpr: logs/$(POD_NAME)
      - mountPath: /metrics
        name: data
        subPathExpr: metrics/$(POD_NAME)
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name

Remember that initContainers only run before the main container starts. They will do the work, but you might need a more robust container (like the neo4j one) if you would like to make further modifications to the pods later on, while the db is up and running.
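
To confirm the init container actually fixed the ownership, one way (a sketch; substitute your own pod name) is to list the mounted folders from the running pod:

```shell
# Each folder should now be owned by neo4j:neo4j (7474:7474)
kubectl exec <your-pod-name> -- ls -ld /data /logs /metrics
```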

PS: I guess I have to get kudos or the solution :smiley:

FYI: the finding in my case.

When you create a GKE cluster, you can add the flag --release-channel "stable" to the gcloud command, like below.

gcloud container clusters create my-neo4j-gke-cluster --num-nodes=1 --machine-type "e2-standard-2" --release-channel "stable"

Thank you!

We found the cause of our issue: it is related to version 1.22 of GKE, which has problems with permissions. We only faced this issue on GCP/GKE; the AWS setup worked fine.

Using this updated command from the Prerequisites page fixed our issue. We added --release-channel "stable", which makes the cluster use v1.21:

gcloud container clusters create my-neo4j-gke-cluster --num-nodes=1 --machine-type "e2-standard-2" --release-channel "stable"
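
To double-check which Kubernetes version the cluster actually ended up on (a command sketch, assuming the cluster name from above):

```shell
# Prints the control-plane version of the cluster, e.g. 1.21.x
gcloud container clusters describe my-neo4j-gke-cluster \
  --format="value(currentMasterVersion)"
```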

Can we see your yaml file content?