Hello all,
I'm trying to run Neo4j as the database for a Node.js app.
For testing purposes I deployed a fork of the Neo4j Browser as a separate Docker image.
I also deployed Neo4j itself together with a corresponding NodePort service.
The deployment file for Neo4j and the service looks like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  generation: 2
  name: test-neo4j
  labels:
    service: test-neo4j
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: test-neo4j
  serviceName: test-neo4j
  template:
    metadata:
      labels:
        app: test-neo4j
    spec:
      containers:
        - command:
            - /bin/bash
            - -c
            - |
              # The exported advertised addresses are only necessary for the Enterprise version and only when actually using causal clusters
              # export NEO4J_dbms_connectors_default__advertised__address=$(hostname -f)
              # export NEO4J_causal__clustering_discovery__advertised__address=$(hostname -f):5000
              # export NEO4J_causal__clustering_transaction__advertised__address=$(hostname -f):6000
              # export NEO4J_causal__clustering_raft__advertised__address=$(hostname -f):7000
              if [ "${AUTH_ENABLED:-}" == "true" ]; then
                export NEO4J_AUTH="neo4j/${NEO4J_SECRETS_PASSWORD}"
              else
                export NEO4J_AUTH="none"
              fi
              exec /docker-entrypoint.sh "neo4j"
          env:
            # This is only needed for the Enterprise version
            #- name: NEO4J_ACCEPT_LICENSE_AGREEMENT
            #  value: "no"
            - name: NEO4J_dbms_mode
              value: CORE
            # Can be scaled up to four cores in the CE version
            - name: NUMBER_OF_CORES
              value: "1"
            - name: AUTH_ENABLED
              value: "false"
            # Causal clustering is only available in the Enterprise version
            #- name: NEO4J_causal__clustering_discovery__type
            #  value: DNS
            #- name: NEO4J_causal__clustering_initial__discovery__members
            #  value: product-category-neo4j.default.svc.cluster.local:5000
          image: neo4j:3.4.5
          imagePullPolicy: IfNotPresent
          name: product-category-neo4j
          ports:
            # Only needed for a causal cluster, so the cluster can discover its causal neighbours
            #- containerPort: 5000
            #  name: discovery
            #  protocol: TCP
            # Only needed when using a causal cluster
            #- containerPort: 7000
            #  name: raft
            #  protocol: TCP
            # Only needed for a causal cluster whose inter-cluster transactions need to be managed
            #- containerPort: 6000
            #  name: tx
            #  protocol: TCP
            - containerPort: 7474
              name: browser
              protocol: TCP
            - containerPort: 7687
              name: bolt
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          # Specify which persistent volume to use and the mount path
          volumeMounts:
            - mountPath: /data
              name: datadir
            - mountPath: /plugins
              name: plugins
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - emptyDir: {}
          name: plugins
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  # This claims the persistent (non-floating) volume for the DB
  volumeClaimTemplates:
    - metadata:
        creationTimestamp: null
        name: datadir
      spec:
        accessModes:
          - ReadWriteOnce
        dataSource: null
        resources:
          requests:
            storage: 8Gi
---
apiVersion: v1
kind: Service
metadata:
  name: test-neo4j
  labels:
    service: test-neo4j
spec:
  selector:
    app: test-neo4j
  ports:
    - port: 7687
      targetPort: 7687
      protocol: TCP
      name: bolt
    - port: 7474
      targetPort: 7474
      protocol: TCP
      name: http
  type: NodePort
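For reference, I check that the service is actually up and has an endpoint for the pod with roughly the following (output omitted here):

# the NodePort mappings for 7474 and 7687 show up here
kubectl get svc test-neo4j
# the endpoints object lists the IP of the StatefulSet pod
kubectl get endpoints test-neo4j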
The .yml for the separately deployed Neo4j Browser looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: neo4j-browser
  labels:
    app: neo4j-browser
    component: replica
spec:
  replicas: 1
  selector:
    matchLabels:
      app: neo4j-browser
  template:
    metadata:
      labels:
        app: neo4j-browser
    spec:
      containers:
        - name: neo4j-browser
          image: my-image
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 80
              name: editor
      restartPolicy: Always
The Neo4j core itself is of course up and running; I tested this both by sending Cypher commands directly to the core and by calling the HTTP API.
If I attach to the container running the neo4j-browser image and call the Neo4j API via a netcat command, I get a response.
All of these run in the same cluster.
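Roughly, that check from inside the browser container looks like this (the pod name is just a placeholder):

kubectl exec -it <neo4j-browser-pod> -- sh
# inside the container, the HTTP API answers on the service name and port
printf 'GET / HTTP/1.0\r\n\r\n' | nc test-neo4j 7474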
Sadly, I can't connect from the browser itself; usually I just get a timeout error.
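For context, in the browser's connect form I point it at the Bolt endpoint, along these lines (host and port are placeholders for whatever the NodePort service exposes):

bolt://<node-ip>:<nodeport-for-7687>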
I can't for the life of me figure out what isn't working correctly.
I tested the whole setup locally and it works flawlessly.
Is the service not correctly configured?