Description of problem (please be as detailed as possible and provide log snippets):

It is possible to add pool entries with the same name, which results in a single pool at the Ceph level but two (or more) entries at the StorageCluster CR level.

Version of all relevant components (if applicable):
ocs-operator 4.16

Rate from 1-5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex):
1

Is this issue reproducible?
Yes.

Can this issue be reproduced from the UI?
No.

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Edit the StorageCluster CR, adding a cephfs data pool:

  spec:
    managedResources:
      cephFilesystems:
        additionalDataPools:
          - name: my-pool
            compressionMode: 'aggressive'
            replicated:
              size: 2

2. Check that the pool is created:

  ceph osd pool ls detail

3. Edit the StorageCluster CR again, adding a new entry with the same name:

  spec:
    managedResources:
      cephFilesystems:
        additionalDataPools:
          - name: my-pool
            compressionMode: 'aggressive'
            replicated:
              size: 2
          - name: my-pool
            compressionMode: 'none'
            replicated:
              size: 3

Actual results:
The previously existing pool has its replica size and compression_mode updated (at the Ceph level) according to the last entry, so there is still a single pool while the CR contains two entries.

Expected results:
It should not be possible to add a new entry with the same name as an existing pool: duplicate entries should be rejected, since stale entries make it hard to tell which one is authoritative.
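For illustration only, here is a minimal Go sketch of the kind of duplicate-name check the operator (or an admission webhook) could run before reconciling additionalDataPools. The NamedPoolEntry type and findDuplicatePoolNames function are hypothetical, trimmed down to the fields used in this report; the real StorageCluster types in ocs-operator may differ.

  package main

  import "fmt"

  // NamedPoolEntry is a hypothetical stand-in for one item of
  // spec.managedResources.cephFilesystems.additionalDataPools.
  type NamedPoolEntry struct {
          Name            string
          CompressionMode string
          ReplicaSize     int
  }

  // findDuplicatePoolNames returns each pool name that appears more than once,
  // which a validation step could use to reject the spec.
  func findDuplicatePoolNames(pools []NamedPoolEntry) []string {
          counts := map[string]int{}
          for _, p := range pools {
                  counts[p.Name]++
          }
          var dups []string
          for _, p := range pools {
                  if counts[p.Name] > 1 {
                          dups = append(dups, p.Name)
                          counts[p.Name] = 0 // report each duplicated name only once
                  }
          }
          return dups
  }

  func main() {
          pools := []NamedPoolEntry{
                  {Name: "my-pool", CompressionMode: "aggressive", ReplicaSize: 2},
                  {Name: "my-pool", CompressionMode: "none", ReplicaSize: 3},
          }
          if dups := findDuplicatePoolNames(pools); len(dups) > 0 {
                  fmt.Printf("rejecting spec: duplicate additionalDataPools entries: %v\n", dups)
          }
  }

Equivalently, uniqueness of name within additionalDataPools could presumably be enforced declaratively (for example via CRD validation), rather than in reconcile code.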
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.17.0 Security, Enhancement, & Bug Fix Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2024:8676