Bug 2297454
| Summary: | CephFS additional data pools: duplicated entries at the CR level. | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Alfonso Martínez <almartin> |
| Component: | rook | Assignee: | Santosh Pillai <sapillai> |
| Status: | CLOSED ERRATA | QA Contact: | Nagendra Reddy <nagreddy> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 4.16 | CC: | ebenahar, mparida, odf-bz-bot, sheggodu, tnielsen |
| Target Milestone: | --- | ||
| Target Release: | ODF 4.17.0 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | 4.17.0-105 | Doc Type: | No Doc Update |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2024-10-30 14:28:53 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.17.0 Security, Enhancement, & Bug Fix Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2024:8676
Description of problem (please be detailed as possible and provide log snippets):

It is possible to add pool entries with the same name, which results in a single pool at the Ceph level but 2 (or more) entries at the StorageCluster CR level.

Version of all relevant components (if applicable): ocs-operator 4.16

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)? 1

Is this issue reproducible? Yes.

Can this issue be reproduced from the UI? No.

If this is a regression, please provide more details to justify this:

Steps to Reproduce:

1. Edit the StorageCluster CR by adding a CephFS data pool:

       spec:
         managedResources:
           cephFilesystems:
             additionalDataPools:
               - name: my-pool
                 compressionMode: 'aggressive'
                 replicated:
                   size: 2

2. Check that the pool is created:

       ceph osd pool ls detail

3. Edit the StorageCluster CR again, adding a new entry with the same name:

       spec:
         managedResources:
           cephFilesystems:
             additionalDataPools:
               - name: my-pool
                 compressionMode: 'aggressive'
                 replicated:
                   size: 2
               - name: my-pool
                 compressionMode: 'none'
                 replicated:
                   size: 3

Actual results:
The previously existing pool has its replica size and compression_mode updated (at the Ceph level) according to the last entry, so there is still a single pool while the CR contains 2 entries.

Expected results:
It should not be possible to add a new entry with the same name as an existing pool: duplicated entries (and entries that become outdated, making it problematic to understand which entry is the correct one) should be rejected.
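The expected behavior above amounts to rejecting duplicate names in additionalDataPools before they are reconciled into Ceph. Below is a minimal sketch of such a name-uniqueness check, assuming a simple pre-reconcile validation; the poolSpec type and validateAdditionalDataPools function are hypothetical stand-ins for illustration only and are not the actual ocs-operator or Rook API.

```go
// Hypothetical validation sketch (not the actual operator code): reject a
// StorageCluster spec whose cephFilesystems.additionalDataPools list contains
// duplicate pool names, so a later entry cannot silently override the earlier
// pool's replica size and compression mode at the Ceph level.
package main

import "fmt"

// poolSpec is a minimal stand-in for an additionalDataPools entry
// (name, compressionMode, replicated.size) as used in the reproducer above.
type poolSpec struct {
	Name            string
	CompressionMode string
	ReplicaSize     uint
}

// validateAdditionalDataPools returns an error naming the first duplicate it
// finds, mirroring the expected result of this bug: duplicates are rejected
// before they reach Ceph.
func validateAdditionalDataPools(pools []poolSpec) error {
	seen := make(map[string]struct{}, len(pools))
	for _, p := range pools {
		if _, dup := seen[p.Name]; dup {
			return fmt.Errorf("duplicate additional data pool name %q", p.Name)
		}
		seen[p.Name] = struct{}{}
	}
	return nil
}

func main() {
	// The reproducer's two entries sharing the name "my-pool".
	pools := []poolSpec{
		{Name: "my-pool", CompressionMode: "aggressive", ReplicaSize: 2},
		{Name: "my-pool", CompressionMode: "none", ReplicaSize: 3},
	}
	if err := validateAdditionalDataPools(pools); err != nil {
		fmt.Println("validation failed:", err) // expected: duplicate name detected
	}
}
```

In practice a check of this kind could live in a validating webhook or at the start of the reconcile loop; either placement would surface the duplicate to the user instead of letting the last entry win at the Ceph level.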