Bug 2297454 - CephFS additional data pools: duplicated entries at the CR level.
Summary: CephFS additional data pools: duplicated entries at the CR level.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: rook
Version: 4.16
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ODF 4.17.0
Assignee: Santosh Pillai
QA Contact: Nagendra Reddy
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2024-07-12 07:55 UTC by Alfonso Martínez
Modified: 2024-10-30 14:28 UTC
CC List: 5 users

Fixed In Version: 4.17.0-105
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-10-30 14:28:53 UTC
Embargoed:




Links
- GitHub red-hat-storage/rook pull 730 (open): Bug 2297454: core: check for duplicate ceph fs pool names (last updated 2024-09-17 04:27:31 UTC)
- GitHub rook/rook pull 14653 (open): core: check for duplicate ceph fs pool names (last updated 2024-08-27 05:16:28 UTC)
- Red Hat Issue Tracker OCSBZM-8696 (last updated 2024-07-16 19:54:05 UTC)
- Red Hat Product Errata RHSA-2024:8676 (last updated 2024-10-30 14:28:57 UTC)

Description Alfonso Martínez 2024-07-12 07:55:31 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
It is possible to add pool entries with the same name, which results in a single pool at the Ceph level but two (or more) entries at the StorageCluster CR level.
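One way to observe the discrepancy is to compare the names listed in the CR against the pools Ceph actually has. This is a sketch assuming the default ODF names (StorageCluster ocs-storagecluster in namespace openshift-storage); adjust for your cluster:

oc get storagecluster ocs-storagecluster -n openshift-storage \
  -o jsonpath='{.spec.managedResources.cephFilesystems.additionalDataPools[*].name}'
ceph osd pool ls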

Version of all relevant components (if applicable):
ocs-operator 4.16

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Yes.

Can this issue be reproduced from the UI?
No.

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Edit the StorageCluster CR to add a CephFS data pool:
spec:
  managedResources:
    cephFilesystems:
      additionalDataPools:
      - name: my-pool
        compressionMode: 'aggressive'
        replicated:
          size: 2
2. Check that the pool was created:
ceph osd pool ls detail
3. Edit the StorageCluster CR again, adding a new entry with the same name (see the patch sketch after these steps):
spec:
  managedResources:
    cephFilesystems:
      additionalDataPools:
      - name: my-pool
        compressionMode: 'aggressive'
        replicated:
          size: 2
      - name: my-pool
        compressionMode: 'none'
        replicated:
          size: 3
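
For reference, step 3 can be applied in one shot with a JSON merge patch. This is a sketch assuming the default StorageCluster name ocs-storagecluster in namespace openshift-storage; note that a merge patch replaces the whole additionalDataPools list:

oc patch storagecluster ocs-storagecluster -n openshift-storage --type merge -p '
{"spec": {"managedResources": {"cephFilesystems": {"additionalDataPools": [
  {"name": "my-pool", "compressionMode": "aggressive", "replicated": {"size": 2}},
  {"name": "my-pool", "compressionMode": "none", "replicated": {"size": 3}}
]}}}}'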


Actual results:
The previously existing pool's replica size and compression_mode are updated (at the Ceph level) according to the last entry, so Ceph still has a single pool while the CR contains two entries.
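
To confirm the Ceph-level state, the surviving pool's settings can be read back. Substitute the actual pool name as reported by ceph osd pool ls detail (Rook derives it from the filesystem name, so the exact name depends on the cluster):

# replace <pool-name> with the name shown by "ceph osd pool ls detail"
ceph osd pool get <pool-name> size
ceph osd pool get <pool-name> compression_mode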

Expected results:
It should not be possible to add a new entry with the same name as an existing pool:
duplicate entries should be rejected, since entries that become outdated make it hard to tell which one is correct.
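
The linked pull requests (rook/rook pull 14653 and the red-hat-storage backport pull 730, both titled "core: check for duplicate ceph fs pool names") add such a guard. Below is a minimal sketch of that kind of check in Go, using hypothetical type and function names rather than rook's actual API:

package main

import "fmt"

// poolSpec stands in for the CR's additionalDataPools entry type
// (hypothetical; not rook's actual struct).
type poolSpec struct {
	Name string
}

// validateNoDuplicatePoolNames returns an error when the same pool
// name appears more than once in the list.
func validateNoDuplicatePoolNames(pools []poolSpec) error {
	seen := make(map[string]struct{}, len(pools))
	for _, p := range pools {
		if _, dup := seen[p.Name]; dup {
			return fmt.Errorf("duplicate CephFS data pool name %q", p.Name)
		}
		seen[p.Name] = struct{}{}
	}
	return nil
}

func main() {
	// Mirrors step 3 above: two entries named "my-pool".
	pools := []poolSpec{{Name: "my-pool"}, {Name: "my-pool"}}
	if err := validateNoDuplicatePoolNames(pools); err != nil {
		fmt.Println("rejected:", err)
	}
}

A set keyed by pool name keeps the check O(n) and reports the first duplicate encountered, so the reconcile can fail fast before any Ceph-level change is made.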

Comment 11 errata-xmlrpc 2024-10-30 14:28:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.17.0 Security, Enhancement, & Bug Fix Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:8676

