Bug 1915807 - Arbiter: OCS install failed when the label topology.kubernetes.io/zone was used instead of the deprecated failureDomain label
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Container Storage
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: OCS 4.7.0
Assignee: Raghavendra Talur
QA Contact: Pratik Surve
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-01-13 13:13 UTC by Neha Berry
Modified: 2021-06-01 08:46 UTC
CC: 7 users

Fixed In Version: 4.7.0-262.ci
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-05-19 09:18:01 UTC
Embargoed:




Links:
Red Hat Product Errata RHSA-2021:2041 (last updated 2021-05-19 09:18:40 UTC)

Description Neha Berry 2021-01-13 13:13:45 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
------------------------------------------------------------------
For an Arbiter mode install from the UI, the UI accepts either or both of the following two labels on participating nodes:

>>failure-domain.beta.kubernetes.io/zone=a  --> deprecated
>>topology.kubernetes.io/zone=c

Hence, when the new topology label was used instead of the deprecated failure-domain.beta.kubernetes.io label, the nodes were properly selected in the UI.
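
For reference, a minimal sketch of how the new label is applied from the CLI (node name and zone value are placeholders):

$ oc label node <node-name> topology.kubernetes.io/zone=a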

But on clicking Create, even though arbiter was enabled, the install failed and no Ceph pods were created. The storagecluster YAML had failureDomain incorrectly set to rack instead of zone:

failureDomain: rack

spec:
  arbiter:
    enable: true
  nodeTopologies:
    arbiterLocation: c
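
For comparison, a sketch of what the same fields should look like once the new label is honored (same CR layout assumed; per the expected results below, failureDomain should come out as zone):

failureDomain: zone

spec:
  arbiter:
    enable: true
  nodeTopologies:
    arbiterLocation: c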


Already had a discussion with Raghavendra, and he confirmed that the fix is still on the way; as of now, ocs-operator might be looking for the old deprecated zone label. Hence raising this bug to track the effort.

Log from rook operator
-------------------------

2021-01-13 13:06:29.347448 E | ceph-cluster-controller: failed to reconcile. failed to reconcile cluster "ocs-storagecluster-cephcluster": failed to configure local ceph cluster: failed to perform validation before cluster creation: expecting exactly three zones for the stretch cluster, but found 5
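
A quick way to see which topology labels each node carries (and hence how many failure domains get counted) is to print the label columns, e.g.:

$ oc get nodes -L topology.kubernetes.io/zone -L failure-domain.beta.kubernetes.io/zone -L topology.rook.io/rack

The first two keys are the labels named above; topology.rook.io/rack is the rack key commonly applied by Rook/OCS (an assumption here; adjust if your cluster uses a different rack label).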


Version of all relevant components (if applicable):
=========================================================
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2021-01-12-203716   True        False         5h52m   Cluster version is 4.7.0-0.nightly-2021-01-12-203716
[nberry@localhost jan13-vmw-dr]$ oc get csv -n openshift-storage
NAME                         DISPLAY                       VERSION        REPLACES   PHASE
ocs-operator.v4.7.0-230.ci   OpenShift Container Storage   4.7.0-230.ci              Succeeded



Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
================================================================
Yes, the arbiter install didn't succeed, and all docs and demos suggest using the topology label.

Is there any workaround available to the best of your knowledge?
==============================================================
Use the deprecated failure-domain label.
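
That is, apply the deprecated key to each participating node; a minimal sketch with placeholder node/zone values:

$ oc label node <node-name> failure-domain.beta.kubernetes.io/zone=a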

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
=============================================================================
4

Is this issue reproducible?
==============================
Yes

Can this issue be reproduced from the UI?
======================================
yes

If this is a regression, please provide more details to justify this:
==============================================================
New feature

Steps to Reproduce:
=====================
1. Install an OCP cluster on VMware with attached LSO devices: 2 workers each in 2 zones, and the third zone can have only one master.

2. Label all worker nodes in zone-a and zone-b with the correct topology.kubernetes.io/zone label, and label the master node in the third zone.
3. Follow the UI flow to install OCS.
4. Check the pods (see the commands after this list).
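
For steps 3 and 4, standard commands suffice to check the result, e.g.:

$ oc get pods -n openshift-storage
$ oc get storagecluster -n openshift-storage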




Actual results:
=================
The Ceph pods failed to get created and the storagecluster is stuck in the Progressing state. The nodes also got the rack label added.
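
One way to confirm the wrong failure domain on the stuck cluster (assuming the default CR name ocs-storagecluster, consistent with the rook log above):

$ oc get storagecluster ocs-storagecluster -n openshift-storage -o yaml | grep -i failuredomain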

Expected results:
--------------------
The new "topology.kubernetes.io/zone" label should be accepted by ocs-operator and the install should succeed.

Comment 4 Mudit Agarwal 2021-01-28 05:11:29 UTC
Hi Neha, sorry for the late response. Talur is working on this and we have a WIP PR.

Comment 5 Jose A. Rivera 2021-01-29 15:37:58 UTC
This seems like a blocker for the arbiter feature as a whole, so flagging this as a potential blocker.

Comment 13 errata-xmlrpc 2021-05-19 09:18:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenShift Container Storage 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2041

