You may want to review our causal clustering docs if you haven't already. A causal cluster has a single leader at a time, elected from among the core members of the cluster via the Raft protocol. The leader is the only node in the cluster allowed to process write queries. Transaction commits also go through Raft, which propagates them from the leader to the other members of the cluster.
Routing only comes into play when you connect with the bolt+routing protocol, and routing decisions happen at the driver level on the client, not on the server. The server supplies the routing table to the client driver on initial connection and updates the driver as the table changes, but each client driver handles the actual routing (here's the documentation; we'll be surfacing the relevant parts of this in the causal clustering docs so it's more visible).
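To make the caching behavior concrete, here's a minimal sketch of a driver-side routing table. The field names (`leader`, `followers`, `read_replicas`, `ttl_seconds`) and the refresh-on-expiry logic are illustrative assumptions, not Neo4j's actual wire format or driver internals:

```python
import time
from dataclasses import dataclass, field

@dataclass
class RoutingTable:
    """Hypothetical, simplified model of the routing table a driver caches."""
    leader: str
    followers: list
    read_replicas: list
    ttl_seconds: int
    fetched_at: float = field(default_factory=time.time)

    def is_stale(self) -> bool:
        # The driver refreshes its cached table once the TTL expires.
        return time.time() - self.fetched_at > self.ttl_seconds

def get_routing_table(cached, fetch_from_server):
    """Return the cached table, or fetch a fresh one from the server when stale."""
    if cached is None or cached.is_stale():
        return fetch_from_server()
    return cached
```

The point is that the server is the source of truth for cluster topology, but the client driver owns the cached copy and decides when to ask for an update.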
In your client code, if a transaction or session is explicitly set as a READ transaction, it will be routed to one of the follower or read replica nodes; this is what provides horizontal scaling for reads. Otherwise (by default, or when explicitly set as a WRITE transaction) the query is routed to the leader.
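The routing decision above can be sketched as a small function. This is a toy model, not real driver code; the server addresses and table shape are made up for illustration:

```python
import random

def choose_server(routing_table, access_mode, rng=random):
    """Route WRITE work to the leader, READ work to a follower or read replica."""
    if access_mode == "WRITE":
        return routing_table["leader"]
    # READ: spread load across followers and read replicas (horizontal scaling).
    readers = routing_table["followers"] + routing_table["read_replicas"]
    # If no readers are available, the leader can still serve reads.
    return rng.choice(readers) if readers else routing_table["leader"]

table = {
    "leader": "core1:7687",
    "followers": ["core2:7687", "core3:7687"],
    "read_replicas": ["replica1:7687"],
}
```

Note that defaulting to WRITE is the safe choice for the driver, since a write sent to a follower would be rejected, while a read handled by the leader merely adds load.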
The core members participate in the Raft protocol, receiving graph updates from the leader and taking part in leader elections when the current leader goes offline or needs to step down, so any core node can become the leader if it wins an election. This lets the cluster gracefully tolerate a number of node/leader failures: writes continue as long as a quorum of core members is maintained, while only reads remain available if quorum is lost.
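The quorum arithmetic behind that behavior is just a majority count. A minimal sketch (the function names are mine, not Neo4j's):

```python
def quorum(core_count: int) -> int:
    """Smallest majority of the core members (e.g. 2 of 3, 3 of 5)."""
    return core_count // 2 + 1

def cluster_can_write(core_count: int, online: int) -> bool:
    # Writes (and leader elections) need a quorum of reachable cores;
    # with quorum lost, the surviving members can still serve reads.
    return online >= quorum(core_count)
```

This is also why core clusters are typically deployed with an odd number of members: a 4-core cluster needs 3 members for quorum and so tolerates only one failure, the same as a 3-core cluster.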