Description of problem:
When installing metrics in a cluster that has failure domains, the Cassandra pods should be failure-domain aware and distribute themselves across those domains.
The conventional labels that identify the failure domains are failure-domain.beta.kubernetes.io/region and failure-domain.beta.kubernetes.io/zone.
These labels are set automatically when running in a cloud environment, but they can also be set manually for on-premise deployments.
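For example, on an on-premise cluster the zone label could be applied to a node by hand (the node name and zone value below are hypothetical):

```
oc label node node-1.example.com failure-domain.beta.kubernetes.io/zone=zone-a
```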
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Deploy a Cassandra cluster in a multi-AZ OCP cluster.
Actual results:
The multiple Cassandra replication controllers ignore the failure domain.
They may happen to distribute themselves correctly by accident, but there is no directive that creates anti-affinity behavior based on the failure domain.
If a StatefulSet were used to deploy the Cassandra cluster, this behavior would be easier to configure.
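A minimal sketch of what such a directive could look like in the pod template, assuming the Cassandra pods carry an app: cassandra label (the label value is illustrative, not the actual label used by the metrics components):

```yaml
# Illustrative pod-template fragment; "app: cassandra" is an assumed label.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: cassandra
      # Spread replicas across failure domains rather than individual nodes.
      topologyKey: failure-domain.beta.kubernetes.io/zone
```

Using the zone label as the topologyKey is what distinguishes this from node-level anti-affinity, where topologyKey would be kubernetes.io/hostname instead.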
Is this the same as bug 1563853? If so, can we close this ticket?
John, it is not the same.
Bug 1564939 is asking for two pods not to be started in the same failure domain.
Bug 1563853 is asking for two pods not to be started on the same node.