
CrashLoopBackOff while Deploying Standalone Instance

melihaydogd
Node Link

The only pod terminates with exit code 1, and I cannot look into the logs because of the following error.

 

Consider unsetting SECURE_FILE_PERMISSIONS environment variable, to enable docker to write to /logs.

Folder /logs is not accessible for user: 7474 or group 7474 or groups 7474, this is commonly a file permissions issue on the mounted folder.

Hints to solve the issue:

1) Make sure the folder exists before mounting it. Docker will create the folder using root permissions before starting the Neo4j container. The root permissions disallow Neo4j from writing to the mounted folder.

2) Pass the folder owner's user ID and group ID to docker run, so that docker runs as that user.

If the folder is owned by the current user, this can be done by adding this flag to your docker run command:

  --user=$(id -u):$(id -g)
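The two hints from the error message can be combined into a plain `docker run` invocation like the following sketch (the paths and image tag are examples, not taken from the original post):

```shell
# Sketch, assuming a local docker run rather than Kubernetes.
# Create the logs folder first, so it is owned by the current user
# instead of being created as root by Docker at mount time.
mkdir -p "$HOME/neo4j/logs"

# Run Neo4j as the folder's owner so it can write to /logs.
docker run --rm \
  --user="$(id -u):$(id -g)" \
  --volume="$HOME/neo4j/logs":/logs \
  neo4j:4.4
```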

Also, the kubectl describe command does not give any useful info. How can I determine the cause of the error? I followed every step in this link.
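For a pod in CrashLoopBackOff, these standard kubectl commands usually surface the crash reason (the pod and namespace names below are placeholders; adjust them to your release):

```shell
kubectl get pods -n neo4j                        # find the crashing pod
kubectl logs my-neo4j-0 -n neo4j --previous      # logs from the last failed run
kubectl describe pod my-neo4j-0 -n neo4j         # events, restart reasons
kubectl get events -n neo4j --sort-by=.lastTimestamp
```

The `--previous` flag is the key one here: it shows the output of the container instance that already exited, which `kubectl logs` alone may not show.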

1 ACCEPTED SOLUTION

Now I realize I did make some changes, but I do not know how they made it work. It was a complete coincidence. Here is what I did:
1. I edited the YAML file in the GKE interface. I deleted the following properties:

securityContext:
  runAsGroup: 7474
  runAsNonRoot: true
  runAsUser: 7474

and

securityContext:
  fsGroup: 7474
  fsGroupChangePolicy: Always
  runAsGroup: 7474
  runAsNonRoot: true
  runAsUser: 7474

2. After editing the YAML file, it still did not work. The pods had warnings.
3. However, after uninstalling the Helm package and installing it again, it started to work.

I hope it helps. 🙂
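The uninstall/reinstall in step 3 would look roughly like this (the release name, namespace, and chart reference are placeholders, not taken from the post):

```shell
helm uninstall my-neo4j -n neo4j
helm install my-neo4j neo4j/neo4j-standalone -n neo4j -f my-values.yaml
```

Editing the deployed YAML in the GKE interface only patches the live objects; a fresh `helm install` recreates the pods and their volumes from scratch, which may be why the reinstall mattered.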


4 REPLIES

dstclair
Node

Hey,
Are you on GKE? I am having the very same problem. There is some discussion at
https://community.neo4j.com/t5/neo4j-graph-platform/helm-kubernetes-error-while-following-standalone...
where using init containers to prepare the volume with the correct permissions is suggested, or alternatively using the stable channel. The stable channel didn't work for me, and even if it had, I am not sure sticking to a particular Kubernetes version would be a long-term fix.

Yes, I am on GKE. Yesterday, I tried deploying again and it worked. I did not make any changes. I have no idea why it worked.


dstclair
Node

I haven't got the init containers working yet, but in the meantime I was able to have it run as the root user. That won't do for real use, but it proves that everything else is good.

 

securityContext:
  runAsNonRoot: false
  runAsUser: 0
  runAsGroup: 0
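An init container of the kind discussed above is a common way to fix volume ownership before Neo4j starts, so that the original `runAsUser: 7474` can be kept. A minimal sketch (the volume names are assumptions, not taken from the Neo4j chart):

```yaml
# Sketch only: chown the mounted volumes to uid/gid 7474 before the
# main Neo4j container starts, so runAsUser: 7474 can write to them.
initContainers:
  - name: fix-volume-permissions
    image: busybox
    command: ["sh", "-c", "chown -R 7474:7474 /logs /data"]
    securityContext:
      runAsUser: 0          # the init container runs as root to chown
    volumeMounts:
      - name: data          # assumed volume name
        mountPath: /data
      - name: logs          # assumed volume name
        mountPath: /logs
```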