Has same "Failed to load" errors as Node1 Then akka / raft: 2020-06-23 15:56:20.220+0000 INFO [c.n.c.c.IdentityModule] Generated new id: MemberId{7d912ecc} (7d912ecc-0783-4e8d-bdd9-f8dde4bc9af7) 2020-06-23 15:56:20.945+0000 INFO [o.n.s.c.SslPolicyLoader] Loaded SSL policy 'BOLT' = SslPolicy{keyCertChain=Subject: CN=54.175.175.43, Issuer: CN=54.175.175.43, ciphers=null, tlsVersions=[TLSv1.2], clientAuth=NONE} 2020-06-23 15:56:20.946+0000 INFO [o.n.s.c.SslPolicyLoader] Loaded SSL policy 'HTTPS' = SslPolicy{keyCertChain=Subject: CN=54.175.175.43, Issuer: CN=54.175.175.43, ciphers=null, tlsVersions=[TLSv1.2], clientAuth=NONE} 2020-06-23 15:56:20.946+0000 INFO [o.n.s.c.SslPolicyLoader] Loaded SSL policy 'CLUSTER' = SslPolicy{keyCertChain=Subject: CN=54.175.175.43, Issuer: CN=54.175.175.43, ciphers=null, tlsVersions=[TLSv1.2], clientAuth=REQUIRE} 2020-06-23 15:56:22.567+0000 INFO [c.n.c.c.TransactionBackupServiceProvider] Binding backup service on address localhost:6362 2020-06-23 15:56:24.479+0000 INFO [c.n.c.c.s.ClusterStateMigrator] Persisted cluster state version is: null 2020-06-23 15:56:24.490+0000 INFO [c.n.c.c.s.ClusterStateMigrator] Created a version storage for version ClusterStateVersion{major=1, minor=0} 2020-06-23 15:56:24.508+0000 INFO [a.ApocConfig] successfully registered ApocConfig for @Context 2020-06-23 15:56:24.656+0000 INFO [a.Pools] successfully registered Pools for @Context 2020-06-23 15:56:24.665+0000 INFO [a.ApocConfig] from system properties: NEO4J_CONF=/etc/neo4j 2020-06-23 15:56:24.666+0000 INFO [a.ApocConfig] system property NEO4J_CONF set to /etc/neo4j 2020-06-23 15:56:24.666+0000 INFO [a.ApocConfig] loading apoc meta config from jar:file:/var/lib/neo4j/plugins/apoc-4.0.0.13-all.jar!/apoc-config.xml 2020-06-23 15:56:25.135+0000 INFO [a.ApocConfig] setting from neo4j.conf: apoc.export.file.enabled=false 2020-06-23 15:56:25.136+0000 INFO [a.ApocConfig] setting from neo4j.conf: apoc.ttl.enabled=false 2020-06-23 15:56:25.139+0000 INFO [a.ApocConfig] setting from neo4j.conf: apoc.import.file.enabled=true 2020-06-23 15:56:25.143+0000 INFO [a.ApocConfig] setting from neo4j.conf: apoc.trigger.enabled=false 2020-06-23 15:56:25.155+0000 INFO [a.ApocConfig] setting from neo4j.conf: apoc.ttl.schedule=PT1M 2020-06-23 15:56:25.158+0000 INFO [a.ApocConfig] setting from neo4j.conf: apoc.import.file.use_neo4j_config=true 2020-06-23 15:56:25.159+0000 INFO [a.ApocConfig] setting from neo4j.conf: apoc.uuid.enabled=false 2020-06-23 15:56:25.163+0000 INFO [a.ApocConfig] setting from neo4j.conf: apoc.ttl.limit=1000 2020-06-23 15:56:25.546+0000 INFO [o.n.b.BoltServer] Bolt server loaded 2020-06-23 15:56:27.960+0000 WARN [a.e.DummyClassForStringSources] Using serializer [com.neo4j.causalclustering.discovery.akka.marshal.UniqueAddressSerializer] for message [akka.cluster.UniqueAddress]. Note that this serializer is not implemented by Akka. It's not recommended to replace serializers for messages provided by Akka. 
2020-06-23 15:56:28.628+0000 INFO [c.n.c.n.Server] raft-server: bound to '10.0.0.46:7000' with transport 'EpollServerSocketChannel'
2020-06-23 15:56:28.669+0000 INFO [c.n.c.n.Server] catchup-server: bound to '10.0.0.46:6000' with transport 'EpollServerSocketChannel'
2020-06-23 15:56:28.684+0000 INFO [c.n.c.n.Server] backup-server: bound to '127.0.0.1:6362' with transport 'EpollServerSocketChannel'
2020-06-23 15:56:28.724+0000 WARN [a.s.Materializer] [outbound connection to [akka://cc-discovery-actor-system@node1.neo4j:5000], control stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(node1.neo4j:5000,None,List(),Some(10000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused
2020-06-23 15:56:28.731+0000 WARN [a.s.s.RestartWithBackoffFlow] Restarting graph due to failure. stack_trace: (akka.stream.StreamTcpException: Tcp command [Connect(node1.neo4j:5000,None,List(),Some(10000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused)
2020-06-23 15:56:28.733+0000 WARN [a.s.Materializer] [outbound connection to [akka://cc-discovery-actor-system@node1.neo4j:5000], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(node1.neo4j:5000,None,List(),Some(10000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused
2020-06-23 15:56:28.733+0000 WARN [a.s.s.RestartWithBackoffFlow] Restarting graph due to failure. stack_trace: (akka.stream.StreamTcpException: Tcp command [Connect(node1.neo4j:5000,None,List(),Some(10000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused)
2020-06-23 15:56:28.748+0000 INFO [c.n.d.ClusteredDbmsReconciler] Database system is requested to transition from EnterpriseDatabaseState{databaseId=DatabaseId{00000000[system]}, operatorState=INITIAL, failed=false} to EnterpriseDatabaseState{databaseId=DatabaseId{00000000[system]}, operatorState=STARTED, failed=false}
2020-06-23 15:56:28.769+0000 INFO [c.n.c.c.CoreDatabaseManager] Creating 'system' database.
2020-06-23 15:56:29.129+0000 INFO [o.n.g.f.EditionLocksFactories] [system] Locking implementation 'forseti' selected.
2020-06-23 15:56:29.151+0000 INFO [o.n.k.i.f.StatementLocksFactorySelector] [system] No services implementing StatementLocksFactory found. Using SimpleStatementLocksFactory
2020-06-23 15:56:29.199+0000 INFO [c.n.c.u.s.TypicallyConnectToRandomReadReplicaStrategy] [system] Using upstream selection strategy typically-connect-to-random-read-replica
2020-06-23 15:56:29.328+0000 INFO [c.n.c.c.CoreDatabaseManager] Starting 'system' database.
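The repeated StreamTcpException / "Connection refused" warnings above mean this node cannot open a TCP connection to the Akka discovery endpoint on node1.neo4j:5000. As a quick independent check (a minimal sketch, not part of the original logs; the hostname and port are taken from the log lines above), a plain socket connect from the same machine should reproduce the same refusal if the problem is network-level:

import java.net.InetSocketAddress;
import java.net.Socket;

// Hypothetical standalone probe: attempts a plain TCP connect to the Akka
// discovery endpoint that the log reports as unreachable (node1.neo4j:5000).
// A "Connection refused" here mirrors the StreamTcpException in the log.
public class DiscoveryPortProbe {
    public static void main(String[] args) {
        String host = "node1.neo4j"; // from the log; adjust per target node
        int port = 5000;             // discovery listen port shown in the log
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 10_000);
            System.out.println("Connected to " + host + ":" + port);
        } catch (Exception e) {
            System.out.println("Cannot reach " + host + ":" + port + " -> " + e);
        }
    }
}

If this connects once node1 is fully started, the refusals are just startup ordering; if it never connects, the discovery port is blocked or not listening.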
2020-06-23 15:56:29.368+0000 INFO [c.n.c.c.s.s.DurableStateStorage] [system] term state restored, up to ordinal 0
2020-06-23 15:56:29.378+0000 INFO [c.n.c.c.s.s.DurableStateStorage] [system] vote state restored, up to ordinal -1
2020-06-23 15:56:29.385+0000 INFO [c.n.c.c.s.s.DurableStateStorage] [system] membership state restored, up to ordinal -1
2020-06-23 15:56:29.397+0000 INFO [c.n.c.c.s.s.DurableStateStorage] [system] lease state restored, up to ordinal -1
2020-06-23 15:56:29.403+0000 INFO [c.n.c.c.s.s.DurableStateStorage] [system] session-tracker state restored, up to ordinal -1
2020-06-23 15:56:29.407+0000 INFO [c.n.c.c.s.s.DurableStateStorage] [system] last-flushed state restored, up to ordinal -1
2020-06-23 15:56:29.465+0000 INFO [c.n.c.c.c.l.s.SegmentedRaftLog] [system] log started with recovered state State{prevIndex=-1, prevTerm=-1, appendIndex=-1}
2020-06-23 15:56:29.479+0000 INFO [c.n.c.c.c.m.RaftMembershipManager] [system] Membership state before recovery: RaftMembershipState{committed=null, appended=null, ordinal=-1}
2020-06-23 15:56:29.484+0000 INFO [c.n.c.c.c.m.RaftMembershipManager] [system] Recovering from: -1 to: -1
2020-06-23 15:56:29.487+0000 INFO [c.n.c.c.c.m.RaftMembershipManager] [system] Membership state after recovery: RaftMembershipState{committed=null, appended=null, ordinal=-1}
2020-06-23 15:56:29.489+0000 INFO [c.n.c.c.c.m.RaftMembershipManager] [system] Target membership: []
2020-06-23 15:56:29.490+0000 WARN [c.n.c.c.c.m.RaftMembershipManager] [system] Target membership [] does not contain a majority of existing raft members []. It is likely an operator removed too many core members too quickly. If not, this issue should be transient.
2020-06-23 15:56:29.645+0000 INFO [c.n.c.d.RaftMonitor] Database 'system' is waiting for a total of 3 core members...
2020-06-23 15:56:29.838+0000 WARN [a.s.Materializer] [outbound connection to [akka://cc-discovery-actor-system@node1.neo4j:5000], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(node1.neo4j:5000,None,List(),Some(10000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused
2020-06-23 15:56:29.838+0000 WARN [a.s.s.RestartWithBackoffFlow] Restarting graph due to failure. stack_trace: (akka.stream.StreamTcpException: Tcp command [Connect(node1.neo4j:5000,None,List(),Some(10000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused)
2020-06-23 15:56:29.868+0000 WARN [a.s.Materializer] [outbound connection to [akka://cc-discovery-actor-system@node1.neo4j:5000], control stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(node1.neo4j:5000,None,List(),Some(10000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused
2020-06-23 15:56:29.869+0000 WARN [a.s.s.RestartWithBackoffFlow] Restarting graph due to failure. stack_trace: (akka.stream.StreamTcpException: Tcp command [Connect(node1.neo4j:5000,None,List(),Some(10000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused)
2020-06-23 15:56:30.152+0000 INFO [c.n.c.d.a.GlobalTopologyState] Core topology for database DatabaseId{00000000} is now: empty No members where lost No new members
2020-06-23 15:56:30.153+0000 WARN [c.n.c.c.c.m.RaftMembershipManager] [system] Target membership [] does not contain a majority of existing raft members []. It is likely an operator removed too many core members too quickly. If not, this issue should be transient.
2020-06-23 15:56:31.959+0000 WARN [a.s.Materializer] [outbound connection to [akka://cc-discovery-actor-system@node1.neo4j:5000], control stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(node1.neo4j:5000,None,List(),Some(10000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused
2020-06-23 15:56:31.959+0000 WARN [a.s.s.RestartWithBackoffFlow] Restarting graph due to failure. stack_trace: (akka.stream.StreamTcpException: Tcp command [Connect(node1.neo4j:5000,None,List(),Some(10000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused)
2020-06-23 15:56:32.046+0000 WARN [a.s.Materializer] [outbound connection to [akka://cc-discovery-actor-system@node1.neo4j:5000], message stream] Upstream failed, cause: StreamTcpException: Tcp command [Connect(node1.neo4j:5000,None,List(),Some(10000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused
2020-06-23 15:56:32.046+0000 WARN [a.s.s.RestartWithBackoffFlow] Restarting graph due to failure. stack_trace: (akka.stream.StreamTcpException: Tcp command [Connect(node1.neo4j:5000,None,List(),Some(10000 milliseconds),true)] failed because of java.net.ConnectException: Connection refused)
2020-06-23 15:56:33.168+0000 INFO [c.n.c.d.a.GlobalTopologyState] Core topology for database DatabaseId{00000000} is now: [MemberId{7d912ecc}] No members where lost New members: MemberId{7d912ecc}=CoreServerInfo{raftServer=10.0.0.46:7000, catchupServer=10.0.0.46:6000, clientConnectorAddresses=bolt://34.201.44.16:7687,http://34.201.44.16:7474,https://34.201.44.16:7473, groups=[], startedDatabaseIds=[DatabaseId{00000000}], refuseToBeLeader=false}
2020-06-23 15:56:33.168+0000 INFO [c.n.c.c.c.m.RaftMembershipManager] [system] Target membership: [MemberId{7d912ecc}]
2020-06-23 15:56:33.169+0000 WARN [c.n.c.c.c.m.RaftMembershipManager] [system] Target membership [MemberId{7d912ecc}] does not contain a majority of existing raft members []. It is likely an operator removed too many core members too quickly. If not, this issue should be transient.
2020-06-23 15:56:36.313+0000 INFO [c.n.c.d.a.GlobalTopologyState] Core topology for database DatabaseId{00000000} is now: [MemberId{a4f45f03}, MemberId{7d912ecc}] No members where lost New members: MemberId{a4f45f03}=CoreServerInfo{raftServer=10.0.2.84:7000, catchupServer=10.0.2.84:6000, clientConnectorAddresses=bolt://34.234.73.125:7687,http://34.234.73.125:7474,https://34.234.73.125:7473, groups=[], startedDatabaseIds=[DatabaseId{00000000}], refuseToBeLeader=false}
2020-06-23 15:56:36.313+0000 INFO [c.n.c.c.c.m.RaftMembershipManager] [system] Target membership: [MemberId{a4f45f03}, MemberId{7d912ecc}]
2020-06-23 15:56:36.314+0000 WARN [c.n.c.c.c.m.RaftMembershipManager] [system] Target membership [MemberId{a4f45f03}, MemberId{7d912ecc}] does not contain a majority of existing raft members []. It is likely an operator removed too many core members too quickly. If not, this issue should be transient.
2020-06-23 15:56:39.702+0000 INFO [c.n.c.d.RaftMonitor] Database 'system' is waiting for a total of 3 core members...
2020-06-23 15:56:42.614+0000 INFO [c.n.c.d.a.GlobalTopologyState] Core topology for database DatabaseId{00000000} is now: [MemberId{a4f45f03}, MemberId{7d912ecc}, MemberId{8b4d3013}] No members where lost New members: MemberId{8b4d3013}=CoreServerInfo{raftServer=10.0.1.222:7000, catchupServer=10.0.1.222:6000, clientConnectorAddresses=bolt://54.92.147.57:7687,http://54.92.147.57:7474,https://54.92.147.57:7473, groups=[], startedDatabaseIds=[DatabaseId{00000000}], refuseToBeLeader=false}
2020-06-23 15:56:42.614+0000 INFO [c.n.c.c.c.m.RaftMembershipManager] [system] Target membership: [MemberId{a4f45f03}, MemberId{7d912ecc}, MemberId{8b4d3013}]
2020-06-23 15:56:42.614+0000 WARN [c.n.c.c.c.m.RaftMembershipManager] [system] Target membership [MemberId{a4f45f03}, MemberId{8b4d3013}, MemberId{7d912ecc}] does not contain a majority of existing raft members []. It is likely an operator removed too many core members too quickly. If not, this issue should be transient.
2020-06-23 15:56:42.616+0000 INFO [c.n.c.d.RaftMonitor] Trying bootstrap using discovery service method
2020-06-23 15:56:43.123+0000 INFO [c.n.c.d.a.GlobalTopologyState] Core topology for database DatabaseId{00000000} is now: [MemberId{a4f45f03}, MemberId{7d912ecc}, MemberId{8b4d3013}] No members where lost No new members
2020-06-23 15:56:43.124+0000 WARN [c.n.c.c.c.m.RaftMembershipManager] [system] Target membership [MemberId{a4f45f03}, MemberId{8b4d3013}, MemberId{7d912ecc}] does not contain a majority of existing raft members []. It is likely an operator removed too many core members too quickly. If not, this issue should be transient.
2020-06-23 15:56:48.308+0000 WARN [a.e.DummyClassForStringSources] Cluster Node [akka://cc-discovery-actor-system@node0.neo4j:5000] - Marking node(s) as UNREACHABLE [Member(address = akka://cc-discovery-actor-system@node1.neo4j:5000, status = Up)]. Node roles [dc-default]
2020-06-23 15:56:48.308+0000 INFO [c.n.c.d.a.GlobalTopologyState] Core topology for database DatabaseId{00000000} is now: [MemberId{a4f45f03}, MemberId{7d912ecc}] Lost members :[MemberId{8b4d3013}] No new members
2020-06-23 15:56:48.308+0000 INFO [c.n.c.c.c.m.RaftMembershipManager] [system] Target membership: [MemberId{a4f45f03}, MemberId{7d912ecc}]
2020-06-23 15:56:48.308+0000 WARN [c.n.c.c.c.m.RaftMembershipManager] [system] Target membership [MemberId{a4f45f03}, MemberId{7d912ecc}] does not contain a majority of existing raft members []. It is likely an operator removed too many core members too quickly. If not, this issue should be transient.
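Right before the bind failure below, the topology flaps between two and three cores and node1.neo4j is marked UNREACHABLE. Since the cc-discovery-actor-system addresses members by hostname, one more cheap check (a hypothetical sketch; node0.neo4j and node1.neo4j appear in the log, node2.neo4j is only an assumption from the naming scheme) is to confirm that each discovery hostname resolves consistently from this machine:

import java.net.InetAddress;

// Hypothetical check: resolve the hostnames used by the discovery system.
// Inconsistent or failing resolution across nodes can make members appear
// and disappear the way the GlobalTopologyState entries above show.
public class DiscoveryDnsCheck {
    public static void main(String[] args) {
        String[] hosts = {"node0.neo4j", "node1.neo4j", "node2.neo4j"}; // node2.neo4j assumed
        for (String host : hosts) {
            try {
                for (InetAddress addr : InetAddress.getAllByName(host)) {
                    System.out.println(host + " -> " + addr.getHostAddress());
                }
            } catch (Exception e) {
                System.out.println(host + " does not resolve: " + e);
            }
        }
    }
}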
2020-06-23 15:56:52.645+0000 WARN [c.n.c.d.a.c.RaftIdActor] Failed to set RaftId with request: RaftIdSetRequest{raftId=RaftId{00000000}, publisher=MemberId{7d912ecc}, timeout=PT15S, replyTo=Actor[akka://cc-discovery-actor-system/temp/$a]}
2020-06-23 15:56:52.748+0000 INFO [c.n.c.d.RaftMonitor] Temporarily moving system database to force store copy
2020-06-23 15:56:52.749+0000 INFO [c.n.c.c.s.BootstrapSaver] Saving: /var/lib/neo4j/data/databases/system
2020-06-23 15:56:52.751+0000 INFO [c.n.c.c.s.BootstrapSaver] Saving: /var/lib/neo4j/data/transactions/system
2020-06-23 15:56:52.753+0000 INFO [c.n.c.c.s.CoreSnapshotService] [system] Waiting for another raft group member to publish a core state snapshot
2020-06-23 15:56:54.681+0000 INFO [c.n.c.d.a.AkkaCoreTopologyService] Restarting discovery system after probable network partition
2020-06-23 15:57:02.755+0000 INFO [c.n.c.c.s.CoreSnapshotService] [system] Waiting for another raft group member to publish a core state snapshot
2020-06-23 15:57:05.332+0000 WARN [a.a.CoordinatedShutdown] Coordinated shutdown phase [actor-system-terminate] timed out after 10000 milliseconds
2020-06-23 15:57:05.397+0000 ERROR [a.i.TcpListener] Bind failed for TCP channel on endpoint [/10.0.0.46:5000] Address already in use
java.net.BindException: Address already in use
    at java.base/sun.nio.ch.Net.bind0(Native Method)
    at java.base/sun.nio.ch.Net.bind(Net.java:455)
    at java.base/sun.nio.ch.Net.bind(Net.java:447)
    at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:227)
    at java.base/sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:80)
    at akka.io.TcpListener.liftedTree1$1(TcpListener.scala:59)
    at akka.io.TcpListener.<init>(TcpListener.scala:56)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
    at akka.util.Reflect$.instantiate(Reflect.scala:68)
    at akka.actor.ArgsReflectConstructor.produce(IndirectActorProducer.scala:98)
    at akka.actor.Props.newActor(Props.scala:212)
    at akka.actor.ActorCell.newActor(ActorCell.scala:646)
    at akka.actor.ActorCell.create(ActorCell.scala:672)
    at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:545)
    at akka.actor.ActorCell.systemInvoke(ActorCell.scala:567)
    at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:293)
    at akka.dispatch.Mailbox.run(Mailbox.scala:228)
    at java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1426)
    at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
    at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
    at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
    at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
    at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
2020-06-23 15:57:05.414+0000 WARN [c.n.c.d.a.AkkaCoreTopologyService] Failed to restart discovery system Failed to bind TCP to [node0.neo4j:5000] due to: Bind failed because of java.net.BindException: Address already in use
akka.remote.RemoteTransportException: Failed to bind TCP to [node0.neo4j:5000] due to: Bind failed because of java.net.BindException: Address already in use
    at akka.remote.artery.tcp.ArteryTcpTransport$$anonfun$3.applyOrElse(ArteryTcpTransport.scala:295)
    at akka.remote.artery.tcp.ArteryTcpTransport$$anonfun$3.applyOrElse(ArteryTcpTransport.scala:290)
    at scala.concurrent.Future.$anonfun$recoverWith$1(Future.scala:413)
    at scala.concurrent.impl.Promise.$anonfun$transformWith$1(Promise.scala:37)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
    at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
    at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:73)
    at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:84)
    at akka.dispatch.BatchingExecutor.execute(BatchingExecutor.scala:122)
    at akka.dispatch.BatchingExecutor.execute$(BatchingExecutor.scala:116)
    at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:83)
    at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68)
    at scala.concurrent.impl.Promise$DefaultPromise.dispatchOrAddCallback(Promise.scala:312)
    at scala.concurrent.impl.Promise$DefaultPromise.onComplete(Promise.scala:303)
    at scala.concurrent.impl.Promise.transformWith(Promise.scala:36)
    at scala.concurrent.impl.Promise.transformWith$(Promise.scala:34)
    at scala.concurrent.impl.Promise$DefaultPromise.transformWith(Promise.scala:183)
    at scala.concurrent.Future.recoverWith(Future.scala:412)
    at scala.concurrent.Future.recoverWith$(Future.scala:411)
    at scala.concurrent.impl.Promise$DefaultPromise.recoverWith(Promise.scala:183)
    at akka.remote.artery.tcp.ArteryTcpTransport.runInboundStreams(ArteryTcpTransport.scala:296)
    at akka.remote.artery.ArteryTransport.start(ArteryTransport.scala:476)
    at akka.remote.RemoteActorRefProvider.init(RemoteActorRefProvider.scala:233)
    at akka.cluster.ClusterActorRefProvider.init(ClusterActorRefProvider.scala:37)
    at akka.actor.ActorSystemImpl.liftedTree2$1(ActorSystem.scala:913)
    at akka.actor.ActorSystemImpl._start$lzycompute(ActorSystem.scala:909)
    at akka.actor.ActorSystemImpl._start(ActorSystem.scala:909)
    at akka.actor.ActorSystemImpl.start(ActorSystem.scala:931)
    at akka.actor.ActorSystem$.apply(ActorSystem.scala:258)
    at akka.actor.ActorSystem$.create(ActorSystem.scala:182)
    at akka.actor.ActorSystem.create(ActorSystem.scala)
    at com.neo4j.causalclustering.discovery.akka.system.ActorSystemFactory.createActorSystem(ActorSystemFactory.java:65)
    at com.neo4j.causalclustering.discovery.akka.system.ActorSystemComponents.<init>(ActorSystemComponents.java:30)
    at com.neo4j.causalclustering.discovery.akka.system.ActorSystemLifecycle.createClusterActorSystem(ActorSystemLifecycle.java:71)
    at com.neo4j.causalclustering.discovery.akka.AkkaCoreTopologyService.start0(AkkaCoreTopologyService.java:119)
    at org.neo4j.kernel.lifecycle.SafeLifecycle.transition(SafeLifecycle.java:124)
    at org.neo4j.kernel.lifecycle.SafeLifecycle.start(SafeLifecycle.java:138)
    at com.neo4j.causalclustering.discovery.akka.AkkaCoreTopologyService.doRestart(AkkaCoreTopologyService.java:280)
    at com.neo4j.causalclustering.discovery.akka.Restarter.restart(Restarter.java:34)
    at com.neo4j.causalclustering.discovery.akka.AkkaCoreTopologyService.restart(AkkaCoreTopologyService.java:272)
    at java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1426)
    at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
    at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
    at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
    at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
    at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
Caused by: akka.stream.impl.io.ConnectionSourceStage$$anon$1$$anon$2: Bind failed because of java.net.BindException: Address already in use

The last two stack traces repeat a great number of times.
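The "Address already in use" error happens when AkkaCoreTopologyService tries to restart the discovery actor system after the suspected partition: the coordinated shutdown of the old system times out ("actor-system-terminate timed out after 10000 milliseconds"), so the old listener still appears to hold 10.0.0.46:5000 when the new one tries to bind. A minimal, hypothetical probe (run outside Neo4j on this host; address and port copied from the log) shows whether anything is still holding the discovery port:

import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Hypothetical probe: try to bind the discovery address from the log
// (10.0.0.46:5000). A BindException here mirrors the TcpListener ERROR above
// and means some process, possibly the old not-yet-terminated actor system,
// still owns the port.
public class DiscoveryBindProbe {
    public static void main(String[] args) {
        try (ServerSocket server = new ServerSocket()) {
            server.bind(new InetSocketAddress("10.0.0.46", 5000));
            System.out.println("Port 5000 is free on 10.0.0.46");
        } catch (Exception e) {
            System.out.println("Cannot bind 10.0.0.46:5000 -> " + e);
        }
    }
}

If the port only stays busy while Neo4j is running, the conflict is the restart loop itself rather than another service listening on 5000.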