Bug 2260818 - Additional failure domain is shown in Configure Ceph Monitor UI tab
Summary: Additional failure domain is shown in Configure Ceph Monitor UI tab
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: management-console
Version: 4.15
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.15.0
Assignee: Nishanth Thomas
QA Contact: Joy John Pinto
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2024-01-29 06:08 UTC by Joy John Pinto
Modified: 2024-03-19 15:32 UTC
5 users

Fixed In Version: 4.15.0-130
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-03-19 15:32:24 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github red-hat-storage odf-console pull 1202 0 None open [release-4.15] Bug 2260818: Updates selector for failure domain values 2024-02-01 09:29:37 UTC
Github red-hat-storage odf-console pull 1203 0 None open [release-4.15-compatibility] Bug 2260818: Updates selector for failure domain values 2024-02-01 09:29:43 UTC
Red Hat Product Errata RHSA-2024:1383 0 None None None 2024-03-19 15:32:26 UTC

Description Joy John Pinto 2024-01-29 06:08:42 UTC
Created attachment 2011331 [details]
ceph_mon.png

Description of problem (please be as detailed as possible and provide log
snippets):
Additional failure domain is shown in Configure Ceph Monitor UI tab

On a six-node cluster with six failure domains, an additional failure domain is shown when configuring Ceph MONs. Refer to ceph_mon.png.


Version of all relevant components (if applicable):
OCP 4.15
ODF 4.15.0-120

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
NA

Is there any workaround available to the best of your knowledge?
NA

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:
NA

Steps to Reproduce:
1. Install OCP 4.15 and ODF 4.15
2. Create a cluster with 5 or more nodes and rack/host-based failure domains
3. Get the number of zones/failure domains using the command "oc get nodes -o jsonpath='{.items[*].metadata.labels.topology\.kubernetes\.io/zone}' | tr ' ' '\n' | sort -u | wc -l" and wait for the CephMonLowNumber alert to be triggered
4. When the CephMonLowNumber alert is triggered, go to the configure modal
5. In the UI, the number of failure domains is shown as 7 whereas it should be 6 (see the sketch below)
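
The linked odf-console PRs ("Updates selector for failure domain values") suggest the over-count comes from how failure-domain values are selected. The following is a minimal, hypothetical TypeScript sketch, not the actual odf-console code: it assumes the count is derived from node topology labels, and the type, label key handling, and function name (countFailureDomains) are illustrative assumptions. It shows one plausible way a six-zone cluster could be reported as seven, and how skipping empty/missing label values avoids that.

// Hypothetical sketch only -- not the actual odf-console implementation.
type NodeLabels = Record<string, string | undefined>;

const ZONE_LABEL = 'topology.kubernetes.io/zone'; // label used in step 3 above

function countFailureDomains(nodes: NodeLabels[], labelKey: string): number {
  const domains = new Set<string>();
  for (const labels of nodes) {
    const value = labels[labelKey];
    // Ignore nodes that are missing the topology label instead of letting an
    // empty/undefined value be counted as an extra failure domain -- one
    // plausible cause of a 7-instead-of-6 count.
    if (value) {
      domains.add(value);
    }
  }
  return domains.size;
}

// Six nodes spread across six zones should yield 6, not 7.
const nodes: NodeLabels[] = ['a', 'b', 'c', 'd', 'e', 'f'].map((z) => ({
  [ZONE_LABEL]: `zone-${z}`,
}));
console.log(countFailureDomains(nodes, ZONE_LABEL)); // 6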


Actual results:
In the UI, the number of failure domains is shown as 7 for a cluster with 6 zones/failure domains


Expected results:
In the UI, the number of failure domains should be shown as 6 for a cluster with 6 zones/failure domains


Additional info:

Comment 6 Joy John Pinto 2024-02-06 06:15:26 UTC
Verified with OCP 4.15 and ODF 4.15.0-130

Installed an OCP and ODF cluster with 6 failure domains. In the configure modal, "Node failure domains" is displayed as 6, which is the valid value (refer to attachment 'ceph_mon_veriifcation.png').

(venv) [jopinto@jopinto 5fd]$ oc get nodes -o jsonpath='{.items[*].metadata.labels.topology\.rook\.io/rack}' | tr ' ' '\n' | sort -u | wc -l
6

Comment 8 errata-xmlrpc 2024-03-19 15:32:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.15.0 security, enhancement, & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:1383

