Causal clustering leverages the Raft protocol, which enforces majority quorum for commits, so not every cluster member must be directly involved in a commit operation. This allows for nodes that are slowed for some reason and unable to keep pace: they will not drag down commit speed. So for a 3-node cluster, only 2 members must be involved for a commit to happen. For a 5-node cluster, only 3 must be involved (allowing up to 2 nodes to be slow or not caught up).
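The majority arithmetic above can be sketched in a few lines. This is just an illustration of the math, not a Neo4j API; `majority` and `fault_tolerance` are names introduced here:

```python
def majority(cluster_size: int) -> int:
    """Smallest number of members that constitutes a majority quorum."""
    return cluster_size // 2 + 1

def fault_tolerance(cluster_size: int) -> int:
    """How many members can be slow or down while commits still succeed."""
    return cluster_size - majority(cluster_size)

for n in (3, 5, 7):
    print(f"{n}-node cluster: {majority(n)} needed to commit, "
          f"{fault_tolerance(n)} may lag")
# 3-node cluster: 2 needed to commit, 1 may lag
# 5-node cluster: 3 needed to commit, 2 may lag
# 7-node cluster: 4 needed to commit, 3 may lag
```

Note that even cluster sizes buy no extra fault tolerance over the next-smaller odd size, which is why odd-sized clusters are typical.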
So we aren't directly impacted by slow nodes: a commit does not require all nodes to be involved, some can be behind, and the number allowed to lag is determined by the cluster size, since that dictates the majority.
In the case of failed nodes, provided quorum is maintained and you're still at or above the minimum cluster size at runtime, the failed node will be voted out and the cluster will scale down, usually resulting in a new majority quorum. There is no impact to service.
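To see how the quorum can change when a failed member is voted out, consider a 5-node cluster losing one member (again plain arithmetic, not a Neo4j API):

```python
def majority(cluster_size: int) -> int:
    """Smallest number of members that constitutes a majority quorum."""
    return cluster_size // 2 + 1

# A 5-node cluster needs 3 members per commit and tolerates 2 lagging/failed.
size = 5
assert majority(size) == 3

# One node fails and is voted out; the cluster scales down to 4 members
# and the majority needed for a commit is recomputed against the new size.
size -= 1
assert majority(size) == 3  # still 3, but now only 1 member may be down
```

So the cluster keeps committing throughout, but its remaining fault tolerance shrinks until the lost member is replaced.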
And as arnleiftordal mentioned, we do use bookmarks to implement causal chaining, for when you absolutely know a read transaction must see the results of a previous write transaction.
If you're reading your own writes in a write tx, then you're executing on the leader and guaranteed to see the latest data anyway.
If you're executing a separate read transaction (which will route to a non-leader node) following a write transaction, and you obtained a bookmark from the write tx and passed it when executing the read query (this is done for you automatically when the read and write tx are executed within the same session), then whichever node receives the read (follower or read replica) must wait until it is caught up with the bookmark before executing. So causal chaining is available, but it is client-driven, not automatic for all transactions.
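To make the wait-for-bookmark behavior concrete, here is a toy model of the idea. This is not the Neo4j driver API, just a sketch under the simplifying assumption that a bookmark is a log position: the write returns one, and a replica serving a read that carries the bookmark must catch up to that position before executing. (In the real drivers, passing the bookmark happens automatically within a session, as noted above.)

```python
class Leader:
    def __init__(self):
        self.log = []          # committed writes, in order

    def write(self, value) -> int:
        """Commit a write and return a bookmark (the log position)."""
        self.log.append(value)
        return len(self.log)

class Replica:
    def __init__(self, leader):
        self.leader = leader
        self.applied = 0       # how far this replica has caught up
        self.data = []

    def catch_up_to(self, bookmark: int):
        """Apply pending transactions until the bookmark is reached."""
        while self.applied < bookmark:
            self.data.append(self.leader.log[self.applied])
            self.applied += 1

    def read(self, bookmark: int = 0):
        # A read carrying a bookmark waits until the replica has
        # caught up with that bookmark before executing.
        self.catch_up_to(bookmark)
        return list(self.data)

leader = Leader()
replica = Replica(leader)

bookmark = leader.write("hello")   # write tx yields a bookmark
print(replica.read())              # no bookmark: may miss the write -> []
print(replica.read(bookmark))      # with bookmark: guaranteed to see it -> ['hello']
```

The first read, issued without a bookmark, is free to execute against stale state; the second read blocks (here, catches up) until the replica reflects the write, which is exactly the causal guarantee the bookmark provides.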