Bug 1928045
| Summary: | N+1 scaling Info message says "single zone" even if the nodes are spread across 2 or 0 zones | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Neha Berry <nberry> |
| Component: | Console Storage Plugin | Assignee: | Ankush Behl <anbehl> |
| Status: | CLOSED ERRATA | QA Contact: | Neha Berry <nberry> |
| Severity: | low | Docs Contact: | |
| Priority: | medium | | |
| Version: | 4.7 | CC: | afrahman, aos-bugs, dwalveka, nibalach, nthomas, ocs-bugs, olakra, ygalanti |
| Target Milestone: | --- | | |
| Target Release: | 4.8.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-07-27 22:44:18 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
sounds good to me!

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438
Created attachment 1756568 [details]
Create Storage class page 3 Info message for N+1 scaling

Description of problem:
===============================
To test N+1 scaling scenarios on a BM or VM LSO cluster, we used dummy zone labels and added 2 nodes to zone-a and a 3rd node to zone-b, i.e. 2 zones in total for 3 nodes.

In the LSO wizard -> Create Storage Class (3rd page), the message says:

"When all the nodes in a selected storageclass are in a single zone the cluster will be using a host based failure domain"

Issue: We have two zones, so all the nodes are not in a "single zone"; they are distributed across 2 zones.

Change request: Please modify the statement to a generalised one, so that it holds true for any config with fewer than 3 zones (i.e. zone count 0, 1, or 2).

Version-Release number of selected component (if applicable):
================================================================
OCP = 4.7.0-0.nightly-2021-02-09-224509
OCS = 4.7.0-257.ci

How reproducible:
===================
Always

Steps to Reproduce:
=========================
1. Configure 2 zones on an on-prem cluster, e.g.:

compute-0 and compute-1 on zone-a
-------------------------------
oc label node compute-0 failure-domain.beta.kubernetes.io/zone=a topology.kubernetes.io/zone=a
oc label node compute-1 failure-domain.beta.kubernetes.io/zone=a topology.kubernetes.io/zone=a

compute-2 on zone-b
--------------------------
oc label node compute-2 failure-domain.beta.kubernetes.io/zone=b topology.kubernetes.io/zone=b

2. Use the OCP 4.7 + OCS 4.7 LSO wizard to install OCS.

3. In "Create Storage Class" (page 3) and in "Review and Create" (page 5), check the Info message about the failure domain being set to host.

Actual results:
==================
We get the message "nodes in a selected storageclass are in a single zone" even if the zone count is 0, 1, or 2.

Expected results:
======================
Either the statement should change based on the zone count, OR generalise the statement (my suggestion, please review):

"When all the nodes in a selected storageclass are spread across less than 3 zones, the cluster will be using a host based failure domain"

Please get it reviewed by the relevant stakeholders :)

Additional info:
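For anyone reproducing this, a quick way to confirm how many distinct zones the labelled nodes actually span, and which failure domain the wizard should therefore report, is a short shell check like the one below. This is a minimal sketch, not part of the bug report: the worker-role label selector is an assumption for illustration, so point it at whichever nodes back the selected storage class.

```bash
# Count the distinct values of the standard zone label across the nodes.
# NOTE: the worker-role selector is an assumption; adjust it to the node
# set backing the storage class under test.
zones=$(oc get nodes -l node-role.kubernetes.io/worker \
  -o jsonpath='{range .items[*]}{.metadata.labels.topology\.kubernetes\.io/zone}{"\n"}{end}' \
  | sort -u | grep -c .)
echo "distinct zones: ${zones}"

# Mirror the behaviour the reworded message describes:
# fewer than 3 zones => host-based failure domain, otherwise zone-based.
if [ "${zones}" -lt 3 ]; then
  echo "expected failure domain: host"
else
  echo "expected failure domain: zone"
fi
```

With the labels from the steps above, this prints 2 distinct zones, which is exactly the "less than 3 zones" case the proposed rewording is meant to cover, alongside zone counts of 0 and 1.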