Bug 1860832 - After upgrading to 4.3.28 customer observed mismatch between node labels (region=CanadaCentral) and pv affinity (region=canadacentral)
Summary: After upgrading to 4.3.28 customer observed mismatch between node labels (region=CanadaCentral) and pv affinity (region=canadacentral)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cloud Compute
Version: 4.3.z
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.3.z
Assignee: Alberto
QA Contact: sunzhaohua
URL:
Whiteboard:
Duplicates: 1866312
Depends On: 1860830
Blocks:
 
Reported: 2020-07-27 08:21 UTC by Alberto
Modified: 2020-09-18 08:31 UTC
CC List: 20 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1860830
Environment:
Last Closed: 2020-09-09 16:24:42 UTC
Target Upstream Version:
ansverma: needinfo? (agarcial)
erich: needinfo? (agarcial)




Links
System ID Priority Status Summary Last Updated
Github openshift origin pull 25327 None closed Bug 1860832: Ensure Azure availability zone is always in lower cases 2020-09-21 23:10:17 UTC
Red Hat Product Errata RHBA-2020:3457 None None None 2020-09-09 16:24:52 UTC

Comment 4 Jan Safranek 2020-08-10 11:12:12 UTC
*** Bug 1866312 has been marked as a duplicate of this bug. ***

Comment 7 Abu Davis 2020-08-17 09:03:21 UTC
Could someone please confirm if the bugfix has made it to 4.3.31?

Comment 13 sunzhaohua 2020-08-21 10:08:58 UTC
Verification failed.
clusterversion: 4.3.0-0.nightly-2020-08-20-225757

The node label is still upper case: failure-domain.beta.kubernetes.io/region=CanadaCentral
Pods are stuck in Pending status.
I checked that this PR is included in https://openshift-release.apps.ci.l2s4.p1.openshiftapps.com/releasestream/4.3.0-0.nightly/release/4.3.0-0.nightly-2020-08-17-103456

$ oc get node --show-labels | grep failure-domain
zhsunazure821-6x7jf-master-0                     Ready    master   53m   v1.16.2+295f6e6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=Standard_D8s_v3,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=CanadaCentral,failure-domain.beta.kubernetes.io/zone=0,kubernetes.io/arch=amd64,kubernetes.io/hostname=zhsunazure821-6x7jf-master-0,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos
zhsunazure821-6x7jf-master-1                     Ready    master   53m   v1.16.2+295f6e6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=Standard_D8s_v3,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=CanadaCentral,failure-domain.beta.kubernetes.io/zone=0,kubernetes.io/arch=amd64,kubernetes.io/hostname=zhsunazure821-6x7jf-master-1,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos
zhsunazure821-6x7jf-master-2                     Ready    master   53m   v1.16.2+295f6e6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=Standard_D8s_v3,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=CanadaCentral,failure-domain.beta.kubernetes.io/zone=0,kubernetes.io/arch=amd64,kubernetes.io/hostname=zhsunazure821-6x7jf-master-2,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos
zhsunazure821-6x7jf-worker-canadacentral-btjxl   Ready    worker   41m   v1.16.2+295f6e6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=Standard_D2s_v3,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=CanadaCentral,failure-domain.beta.kubernetes.io/zone=0,kubernetes.io/arch=amd64,kubernetes.io/hostname=zhsunazure821-6x7jf-worker-canadacentral-btjxl,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos
zhsunazure821-6x7jf-worker-canadacentral-jrd5r   Ready    worker   41m   v1.16.2+295f6e6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=Standard_D2s_v3,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=CanadaCentral,failure-domain.beta.kubernetes.io/zone=0,kubernetes.io/arch=amd64,kubernetes.io/hostname=zhsunazure821-6x7jf-worker-canadacentral-jrd5r,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos

$ oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS      REASON   AGE
pvc-9e470188-0c0e-4c47-b76c-21a1b33ff5c9   1Gi        RWO            Delete           Bound    default/pvc1   managed-premium            28m
$ oc get pvc
NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pvc1   Bound    pvc-9e470188-0c0e-4c47-b76c-21a1b33ff5c9   1Gi        RWO            managed-premium   29m
$ oc get po
NAME          READY   STATUS    RESTARTS   AGE
task-pv-pod   0/1     Pending   0          29m
$ oc describe po | tail
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  Failed to bind volumes: pv "pvc-9e470188-0c0e-4c47-b76c-21a1b33ff5c9" node affinity doesn't match node "zhsunazure821-6x7jf-worker-canadacentral-btjxl": No matching NodeSelectorTerms
  Warning  FailedScheduling  <unknown>  default-scheduler  0/5 nodes are available: 2 node(s) had volume node affinity conflict, 3 node(s) had taints that the pod didn't tolerate.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/5 nodes are available: 2 node(s) had volume node affinity conflict, 3 node(s) had taints that the pod didn't tolerate.
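The "volume node affinity conflict" above comes down to a case-sensitive string comparison: PV NodeSelectorTerms are matched exactly, so the node's mixed-case region label can never satisfy the PV's lower-case affinity. A minimal sketch, using the values reported in the outputs above:

```shell
# NodeSelectorTerms use exact, case-sensitive string comparison,
# so "CanadaCentral" on the node never matches "canadacentral" on the PV.
node_region="CanadaCentral"    # node label set at kubelet registration
pv_region="canadacentral"      # region recorded in the PV's nodeAffinity

if [ "$node_region" = "$pv_region" ]; then
  echo "affinity matches"
else
  echo "volume node affinity conflict"    # this branch is taken
fi
```

Normalizing both sides to lower case (as the linked PR does for the Azure availability zone) is what makes the comparison succeed.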

Comment 17 Michael Gugino 2020-09-01 22:35:48 UTC
This is fixed in the latest 4.4; anyone urgently waiting on the fix can upgrade to 4.4.

I'm not sure why this didn't pass QA. I've looked at the code, and it seems to me it should have passed. I'm not sure how this code actually makes it into the node manager, or whether it was actually in the release. Sending to the Node team to investigate.

Comment 18 Seth Jennings 2020-09-02 15:58:38 UTC
This is possibly due to the node initially registering with the old kubelet in the base RHCOS before the new osimage is deployed.

Please ensure that the kubelet in the base RHCOS image contains the fix, since the node can only label itself once, at initial registration time.

Comment 19 sunzhaohua 2020-09-02 19:11:38 UTC
Verified
clusterversion: 4.3.0-0.nightly-2020-09-01-015751

$ oc get po
NAME          READY   STATUS    RESTARTS   AGE
task-pv-pod   1/1     Running   0          2m29s
$ oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS      REASON   AGE
pvc-8a1566b4-a1fd-4a44-b498-c19557870d41   1Gi        RWO            Delete           Bound    default/pvc1   managed-premium            2m32s
$ oc get pvc
NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pvc1   Bound    pvc-8a1566b4-a1fd-4a44-b498-c19557870d41   1Gi        RWO            managed-premium   2m51s
$ oc get node --show-labels
NAME                                            STATUS   ROLES    AGE   VERSION           LABELS
zhsun93azure-s967t-master-0                     Ready    master   78m   v1.16.2+7279a4a   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=Standard_D8s_v3,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=canadacentral,failure-domain.beta.kubernetes.io/zone=0,kubernetes.io/arch=amd64,kubernetes.io/hostname=zhsun93azure-s967t-master-0,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos
zhsun93azure-s967t-master-1                     Ready    master   78m   v1.16.2+7279a4a   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=Standard_D8s_v3,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=canadacentral,failure-domain.beta.kubernetes.io/zone=0,kubernetes.io/arch=amd64,kubernetes.io/hostname=zhsun93azure-s967t-master-1,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos
zhsun93azure-s967t-master-2                     Ready    master   78m   v1.16.2+7279a4a   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=Standard_D8s_v3,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=canadacentral,failure-domain.beta.kubernetes.io/zone=0,kubernetes.io/arch=amd64,kubernetes.io/hostname=zhsun93azure-s967t-master-2,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos
zhsun93azure-s967t-worker-canadacentral-ghszk   Ready    worker   67m   v1.16.2+7279a4a   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=Standard_D2s_v3,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=canadacentral,failure-domain.beta.kubernetes.io/zone=0,kubernetes.io/arch=amd64,kubernetes.io/hostname=zhsun93azure-s967t-worker-canadacentral-ghszk,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos
zhsun93azure-s967t-worker-canadacentral-xhhfk   Ready    worker   65m   v1.16.2+7279a4a   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=Standard_D2s_v3,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=canadacentral,failure-domain.beta.kubernetes.io/zone=0,kubernetes.io/arch=amd64,kubernetes.io/hostname=zhsun93azure-s967t-worker-canadacentral-xhhfk,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos

Comment 21 errata-xmlrpc 2020-09-09 16:24:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.3.35 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:3457

Comment 22 Colin Walters 2020-09-09 17:21:01 UTC
> This is possibly due to the node initially registering with the old kubelet in the base RHCOS before the new osimage is deployed.

No, kubelet shouldn't start until we've updated.  https://github.com/openshift/machine-config-operator/blob/master/docs/OSUpgrades.md

That said, there were bugs in 4.3 where, if we encountered an error during that initial upgrade/pivot, we would still stumble on and start kubelet anyway; this has since been fixed.
See e.g. https://github.com/openshift/machine-config-operator/commit/75dbab9c54c6cb3470075af1da1b139ecea02d38

Comment 23 Michael Gugino 2020-09-09 18:24:09 UTC
(In reply to Colin Walters from comment #22)
> > This is possibly due to the node initially registering with the old kubelet in the base RHCOS before the new osimage is deployed.
> 
> No, kubelet shouldn't start until we've updated.
> https://github.com/openshift/machine-config-operator/blob/master/docs/OSUpgrades.md
> 
> That said there were bugs in 4.3 which if we encountered an error during
> that initial upgrade/pivot we would still stumble on and start kubelet
> anyways which has since been fixed.
> See e.g.
> https://github.com/openshift/machine-config-operator/commit/75dbab9c54c6cb3470075af1da1b139ecea02d38

Okay, looks like that particular fix only landed in 4.5 and newer.

So, for most users, the latest version of 4.3 or newer should be unaffected.  However, in some edge cases on releases older than 4.5, for clusters originally installed with an affected version of 4.3 or below, the labels may have to be manually changed on any hosts whose first boot was interrupted.

Some clusters may need to update their boot image manually to permanently fix this issue, as that functionality does not exist today.  In a future release, we hope to assist with automatically updating boot images for each supported platform.  I'm unsure how this might be achieved today, however.  Unless the MCO/RHCOS team can sort out how to make this change on each platform, any current and future cases affected by this will still need to manually change the labels.
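For anyone stuck on an already-affected node, a manual relabel along these lines should bring the node back in line with the PV's lower-case affinity. This is only a sketch: the node name and label value are taken from the failed-verification output above, and the oc invocation is printed with echo here rather than executed (per comment 18 the kubelet only labels the node once at registration, so the overwrite should persist, but verify on your own cluster):

```shell
# Hypothetical affected node, taken from the output in comment 13.
node="zhsunazure821-6x7jf-worker-canadacentral-btjxl"
label="failure-domain.beta.kubernetes.io/region"

current="CanadaCentral"                                       # mixed-case value on the node
fixed=$(printf '%s' "$current" | tr '[:upper:]' '[:lower:]')  # canadacentral

# Printed rather than executed here; drop the echo to apply on a live cluster.
echo oc label node "$node" "$label=$fixed" --overwrite
```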

