HTTP port 7474 does not work when I enable cluster settings

I installed Neo4j Enterprise on 3 Ubuntu servers. All of them work fine over the HTTP browser interface. However, when I enable the Causal Cluster settings, the HTTP port 7474 goes off and I can't connect to it. The cluster itself seems to be working, as I can see connections established between the nodes in the attached screenshot:
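For context, the cluster settings I enabled are along these lines (a sketch assuming Neo4j 3.x setting names; the hostnames are placeholders, not my actual addresses):

```
# neo4j.conf - minimal causal clustering section (Neo4j 3.x naming)
dbms.mode=CORE
causal_clustering.expected_core_cluster_size=3
causal_clustering.initial_discovery_members=server1:5000,server2:5000,server3:5000

# Bind the client connectors (bolt/http/https) on all interfaces
dbms.connectors.default_listen_address=0.0.0.0
# Address advertised to other cluster members and to clients
dbms.connectors.default_advertised_address=server1
```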

http 7474 port goes off

Off? Can you provide more details of what "off" means?

Does a

netstat -an | grep 7474

report any output?

Is this only an issue with connecting to :7474 from a remote browser, or can you also not connect to :7474 locally, for example with curl?

It returns nothing. And I can't curl it either.

I can't connect to the mentioned port locally or remotely.
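A sketch of such local checks (assuming a Linux host with `ss` and `curl` available; 7474 is the default Neo4j HTTP port):

```shell
# Is anything listening on the HTTP port at all?
ss -lnt 2>/dev/null | grep 7474 || echo "nothing listening on 7474"

# Can we reach it over HTTP locally?
curl -s --max-time 5 http://localhost:7474/ > /dev/null \
  && echo "HTTP reachable" \
  || echo "HTTP connection failed"
```

If nothing is listening, the problem is on the server side (the connector never bound), not a firewall between the browser and the host.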

Can you attach neo4j.log and/or debug.log?

OK, attached are the files:

debug.txt (2.4 MB) neo4j.txt (9.6 KB) security.txt (1.1 KB)

A causal cluster with less than 1GB of total RAM. Yeah, it should work, but that's awfully small.

Also from the debug.log

2020-12-23 09:36:49.306+0000 INFO [o.n.m.MetricsExtension] Initiating metrics...
2020-12-23 09:36:49.339+0000 INFO [c.n.c.d.SslHazelcastCoreTopologyService] Cluster discovery service starting
2020-12-23 09:36:49.452+0000 INFO [c.n.c.d.SslHazelcastCoreTopologyService] My connection info: [
	Discovery:   listen=174.138.22.22:5000, advertised=174.138.22.22:5000,
	Transaction: listen=174.138.22.22:6000, advertised=174.138.22.22:6000, 
	Raft:        listen=174.138.22.22:7000, advertised=174.138.22.22:7000, 
	Client Connector Addresses: bolt://174.138.22.22:7687,http://174.138.22.22:7474,https://174.138.22.22:7473
]
2020-12-23 09:36:49.453+0000 INFO [c.n.c.d.SslHazelcastCoreTopologyService] Discovering other core members in initial members set: [128.199.161.205:5000, 174.138.22.22:5000, 188.166.182.210:5000]
2020-12-23 09:36:49.693+0000 INFO [o.n.c.c.c.l.s.RecoveryProtocol] Skipping from index -1 to 1.
2020-12-23 09:36:49.716+0000 INFO [o.n.c.c.c.l.s.SegmentedRaftLog] log started with recovered state State{prevIndex=0, prevTerm=0, appendIndex=7}
2020-12-23 09:36:49.720+0000 INFO [o.n.c.c.c.m.RaftMembershipManager] Membership state before recovery: RaftMembershipState{committed=MembershipEntry{logIndex=0, members=[MemberId{2a8e7ebb}, MemberId{ba2034ac}, MemberId{b39c99b8}]}, appended=null, ordinal=0}
2020-12-23 09:36:49.720+0000 INFO [o.n.c.c.c.m.RaftMembershipManager] Recovering from: 0 to: 7
2020-12-23 09:36:49.721+0000 INFO [o.n.c.c.c.m.RaftMembershipManager] Membership state after recovery: RaftMembershipState{committed=MembershipEntry{logIndex=0, members=[MemberId{2a8e7ebb}, MemberId{ba2034ac}, MemberId{b39c99b8}]}, appended=null, ordinal=0}
2020-12-23 09:36:49.722+0000 INFO [o.n.c.c.c.m.RaftMembershipManager] Target membership: []
2020-12-23 09:36:49.722+0000 INFO [o.n.c.c.c.m.RaftMembershipManager] Not safe to remove members [MemberId{2a8e7ebb}, MemberId{ba2034ac}] because it would reduce the number of voting members below the expected cluster size of 3. Voting members: [MemberId{2a8e7ebb}, MemberId{ba2034ac}, MemberId{b39c99b8}]
2020-12-23 09:36:50.160+0000 INFO [o.n.c.n.Server] raft-server: bound to 174.138.22.22:7000
2020-12-23 09:46:49.456+0000 WARN [c.n.c.d.SslHazelcastCoreTopologyService] The server has not been able to connect in a timely fashion to the cluster. Please consult the logs for more details. Rebooting the server may solve the problem.

I'm not seeing where the cluster actually forms. Is the graph empty on all members?
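Given the WARN above, one hedged check is whether each core member's discovery port (5000, per the addresses in the log) is even reachable over TCP from this host. A sketch, assuming bash with `/dev/tcp` support and coreutils `timeout`:

```shell
#!/bin/bash
# check_port: report whether host:port accepts a TCP connection.
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port reachable"
  else
    echo "$host:$port unreachable"
  fi
}

# Discovery addresses taken from the debug.log above
for host in 128.199.161.205 174.138.22.22 188.166.182.210; do
  check_port "$host" 5000
done
```

If any member reports unreachable, discovery can never complete, which would match the "has not been able to connect in a timely fashion" warning.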

Yes, there is no database yet, as I was planning to create the database via the browser.

Running cypher-shell locally also says connection refused.
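For reference, the invocation was along these lines (address and credentials are placeholders):

```
cypher-shell -a bolt://localhost:7687 -u neo4j -p <password>
```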

And the other 2 cluster members were also started when this instance was started? It might be best to include the logs from all 3 members of the cluster.

No need, my client just cancelled the project.

Hi @hrehman200,
Sad to hear the project got cancelled, but for future reference, try this Docker setup:
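A sketch of such a setup with docker-compose (assuming the 3.5 enterprise image and its convention of mapping `neo4j.conf` settings to environment variables, with `.` → `_` and `_` → `__`; service names are placeholders):

```yaml
version: "3"
services:
  core1:
    image: neo4j:3.5-enterprise
    environment:
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - NEO4J_dbms_mode=CORE
      - NEO4J_causal__clustering_expected__core__cluster__size=3
      - NEO4J_causal__clustering_initial__discovery__members=core1:5000,core2:5000,core3:5000
      - NEO4J_dbms_connectors_default__advertised__address=core1
    ports:
      - "7474:7474"
      - "7687:7687"
  # core2 and core3 are defined analogously, each with its own
  # advertised address and host port mappings
```

Running the members on one Docker network sidesteps the inter-host discovery and firewall issues seen in this thread.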