Bug 1860906 - Setting for openshift_upgrade_nodes_label ignored during upgrade
Summary: Setting for openshift_upgrade_nodes_label ignored during upgrade
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: medium
Target Milestone: ---
Target Release: 3.11.z
Assignee: Russell Teague
QA Contact: Johnny Liu
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2020-07-27 11:53 UTC by Robert Heinzmann
Modified: 2024-03-25 16:13 UTC
CC: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-12-16 12:35:06 UTC
Target Upstream Version:
Embargoed:


Links
  GitHub openshift/openshift-ansible PR 12269 (closed): Bug 1860906: openshift-cluster/upgrades: Allow short nodes names when filtering (last updated 2021-01-26 08:41:37 UTC)
  GitHub openshift/openshift-ansible PR 12271 (closed): Revert "Bug 1860906: openshift-cluster/upgrades: Allow short nodes names when filtering" (last updated 2021-01-26 08:41:37 UTC)
  GitHub openshift/openshift-ansible PR 12272 (closed): Bug 1860906: openshift-cluster/upgrades: Allow openshift_kubelet_name_override when filtering (last updated 2021-01-26 08:41:37 UTC)
  Red Hat Bugzilla 1649074 (high, CLOSED): Node label `type=upgrade` is ignored when upgrading OCP (last updated 2024-03-25 15:09:52 UTC)
  Red Hat Product Errata RHSA-2020:5363 (last updated 2020-12-16 12:35:50 UTC)

Description Robert Heinzmann 2020-07-27 11:53:40 UTC
Description of problem:

When using openshift_upgrade_nodes_label while running the 3.10 -> 3.11 upgrade playbook (also 3.9 -> 3.10), all nodes are upgraded, not only the ones carrying the upgrade label. This happens only when the hostname returns the FQDN (node-1.example.com) while the OCP node name is the short name (node-1).

Version-Release number of the following components:

[quicklab@master-1 ~]$ rpm -q openshift-ansible
openshift-ansible-3.11.200-1.git.0.3f37acb.el7.noarch

[quicklab@master-1 ~]$ rpm -q ansible
ansible-2.9.10-1.el7ae.noarch

[quicklab@master-1 ~]$ ansible --version
ansible 2.9.10
  config file = /home/quicklab/ansible.cfg
  configured module search path = [u'/home/quicklab/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Sep 26 2019, 13:23:47) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]

How reproducible:

Always, whenever the hostname reported by "hostname" (FQDN) differs from the node name returned by "oc get node" (short name).

Steps to Reproduce:
1. Create a test setup where the nodes in OCP have short names (e.g. short cloud provider machine names) while the hostname command returns the FQDN.
2. Label the nodes to upgrade accordingly (as described in the docs: https://docs.openshift.com/container-platform/3.11/upgrading/automated_upgrades.html#customizing-node-upgrades).
3. Run the upgrade playbook (see the inventory sketch after this list).
4. Observe that all nodes are upgraded instead of only the matching nodes.
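
For reference, a minimal inventory sketch of the relevant settings; the label, node name, and playbook path are illustrative (the variables come from the 3.11 upgrade docs linked in step 2), not taken from the reporter's environment:

  [OSEv3:vars]
  # Upgrade only nodes carrying this label
  openshift_upgrade_nodes_label="upgrade=true"
  # Optionally limit how many labeled nodes are upgraded at a time
  openshift_upgrade_nodes_serial=1

  # Label a node, then run the node upgrade playbook:
  #   oc label node node-1 upgrade=true
  #   ansible-playbook -i <inventory> \
  #     playbooks/byo/openshift-cluster/upgrades/v3_11/upgrade_nodes.yml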

Actual results:

All nodes are updated, not only the labeled nodes.

Expected results:

Only labeled nodes are updated.

Additional info:

 - `hostname` = fqdn          (infra-0.example.com)
 - `hostname -f` = fqdn       (infra-0.example.com)
 - `hostnamectl` = fqdn       (infra-0.example.com)
 - `oc get node` = short name (infra-0)

Also see https://bugzilla.redhat.com/show_bug.cgi?id=1649074 which fixed another issue with node names.

Comment 2 Pierre Prinetti 2020-07-30 14:09:43 UTC
Reassigning to openshift-ansible as the playbook does not seem platform-specific

Comment 5 Pierre Prinetti 2020-09-14 11:53:24 UTC
I was pointed to [1] by Tomas Sedovic who worked on this issue at the time. Is it what you're looking for?

[1]: https://github.com/openshift/openshift-ansible/blob/release-3.11/playbooks/openstack/configuration.md#configure-playbooks-to-use-internal-dns

Comment 6 Robert Heinzmann 2020-09-14 11:59:49 UTC
There are actually two parts to this story:

a) making the hostnames work
b) making sure the hostname filter does not return an empty list

Having looked into the docs briefly, I believe they address a). I think we still need a fix / error condition for b).

If a user specifies a filter and it does not match ANY node, the user should be made aware of it and should be required to explicitly rerun the playbook without the filter to proceed (fully aware of the consequences, i.e. no batches).
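
A minimal sketch of such a guard as an Ansible task; the task name matches the one visible in the QE logs further down, but the actual implementation that landed may differ:

  - name: Fail if temp_nodes_to_upgrade is empty with openshift_upgrade_nodes_label
    fail:
      msg: >-
        openshift_upgrade_nodes_label matched no inventory hosts; refusing to
        silently fall back to upgrading all nodes. Re-run without the filter
        if upgrading every node is really intended.
    when:
      - openshift_upgrade_nodes_label | default('') | length > 0
      - groups['temp_nodes_to_upgrade'] | default([]) | length == 0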

Comment 7 Brenton Leanhardt 2020-09-14 12:17:54 UTC
I agree with you, Robert, that (b) is a valid bug. I'll talk with the team to find out the chances of it being prioritized.

Comment 12 Russell Teague 2020-11-10 13:56:58 UTC
I've opened a PR with the suggested fix for additional testing.  Thanks for doing much of the investigative work on this.

Comment 15 Johnny Liu 2020-11-16 10:31:53 UTC
In QE's v3.10 -> v3.11 upgrade testing, we could not reproduce this issue with the default hostname setup:
 - `hostname` = short name    (infra-0)
 - `hostname -f` = fqdn       (infra-0.example.com)
 - `hostnamectl` = short name (infra-0)
 - `oc get node` = short name (infra-0)

To reproduce this bug, some manual intervention is required:
1. echo "<short name>" > /etc/sysconfig/KUBELET_HOSTNAME_OVERRIDE, and make sure "nodeName: <short name>" is set in /etc/origin/node/node-config.yaml
2. Change the hostname to the FQDN in /etc/hostname.
3. Reboot the host.
4. Ensure the node becomes Ready:
 - `hostname` = fqdn          (infra-0.example.com)
 - `oc get node` = short name (infra-0)

$ oc get node
NAME                               STATUS    ROLES     AGE       VERSION
jialiu1310master-etcd-nfs-1        Ready     master    4h        v1.10.0+b81c8f8
jialiu1310node-1                   Ready     compute   3h        v1.10.0+b81c8f8
jialiu1310node-registry-router-1   Ready     <none>    3h        v1.10.0+b81c8f8

[root@jialiu1310node-registry-router-1 ~]# hostname
jialiu1310node-registry-router-1.int.1116-o5c.qe.rhcloud.com
[root@jialiu1310node-registry-router-1 ~]# hostname -f
jialiu1310node-registry-router-1.int.1116-o5c.qe.rhcloud.com
[root@jialiu1310node-registry-router-1 ~]# cat /etc/origin/node/node-config.yaml |grep -i nodeName
nodeName: jialiu1310node-registry-router-1
5. Append openshift_kubelet_name_override="<short name>" to the host line of each node you are going to upgrade in your inventory file.
6. Run the playbook with openshift_upgrade_nodes_label="mylabel=foo" added in your inventory file to upgrade only the matched nodes (sketched just below).
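
A sketch of what steps 5 and 6 look like in the inventory; the host name and label are illustrative:

  [nodes]
  infra-0.example.com openshift_kubelet_name_override="infra-0"

  [OSEv3:vars]
  openshift_upgrade_nodes_label="mylabel=foo"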



Running the openshift-ansible-3.11.317 build reproduces this bug:
11-16 15:39:54  TASK [Map labelled nodes to inventory hosts] ***********************************
11-16 15:39:55  skipping: [ci-vm-10-0-149-92.hosted.upshift.rdu2.redhat.com] => (item=ci-vm-10-0-149-92.hosted.upshift.rdu2.redhat.com)  => {"ansible_loop_var": "item", "changed": false, "item": "ci-vm-10-0-149-92.hosted.upshift.rdu2.redhat.com", "skip_reason": "Conditional result was False"}
11-16 15:39:55  skipping: [ci-vm-10-0-149-92.hosted.upshift.rdu2.redhat.com] => (item=ci-vm-10-0-150-126.hosted.upshift.rdu2.redhat.com)  => {"ansible_loop_var": "item", "changed": false, "item": "ci-vm-10-0-150-126.hosted.upshift.rdu2.redhat.com", "skip_reason": "Conditional result was False"}
11-16 15:39:55  skipping: [ci-vm-10-0-149-92.hosted.upshift.rdu2.redhat.com] => (item=ci-vm-10-0-149-124.hosted.upshift.rdu2.redhat.com)  => {"ansible_loop_var": "item", "changed": false, "item": "ci-vm-10-0-149-124.hosted.upshift.rdu2.redhat.com", "skip_reason": "Conditional result was False"}
11-16 15:39:55  
11-16 15:39:55  TASK [Evaluate oo_nodes_to_upgrade] ********************************************
11-16 15:39:55  ok: [ci-vm-10-0-149-92.hosted.upshift.rdu2.redhat.com] => (item=ci-vm-10-0-149-92.hosted.upshift.rdu2.redhat.com) => {"add_host": {"groups": ["oo_nodes_to_upgrade"], "host_name": "ci-vm-10-0-149-92.hosted.upshift.rdu2.redhat.com", "host_vars": {}}, "ansible_loop_var": "item", "changed": false, "item": "ci-vm-10-0-149-92.hosted.upshift.rdu2.redhat.com"}
11-16 15:39:55  ok: [ci-vm-10-0-149-92.hosted.upshift.rdu2.redhat.com] => (item=ci-vm-10-0-150-126.hosted.upshift.rdu2.redhat.com) => {"add_host": {"groups": ["oo_nodes_to_upgrade"], "host_name": "ci-vm-10-0-150-126.hosted.upshift.rdu2.redhat.com", "host_vars": {}}, "ansible_loop_var": "item", "changed": false, "item": "ci-vm-10-0-150-126.hosted.upshift.rdu2.redhat.com"}
11-16 15:39:55  ok: [ci-vm-10-0-149-92.hosted.upshift.rdu2.redhat.com] => (item=ci-vm-10-0-149-124.hosted.upshift.rdu2.redhat.com) => {"add_host": {"groups": ["oo_nodes_to_upgrade"], "host_name": "ci-vm-10-0-149-124.hosted.upshift.rdu2.redhat.com", "host_vars": {}}, "ansible_loop_var": "item", "changed": false, "item": "ci-vm-10-0-149-124.hosted.upshift.rdu2.redhat.com"}


Running the same test with openshift-ansible-3.11.318, the initial issue is fixed:
11-16 16:18:43  TASK [Map labelled nodes to inventory hosts] ***********************************
11-16 16:18:43  skipping: [ci-vm-10-0-151-98.hosted.upshift.rdu2.redhat.com] => (item=ci-vm-10-0-151-98.hosted.upshift.rdu2.redhat.com)  => {"ansible_loop_var": "item", "changed": false, "item": "ci-vm-10-0-151-98.hosted.upshift.rdu2.redhat.com", "skip_reason": "Conditional result was False"}

11-16 16:18:43  ok: [ci-vm-10-0-151-98.hosted.upshift.rdu2.redhat.com] => (item=ci-vm-10-0-150-33.hosted.upshift.rdu2.redhat.com) => {"add_host": {"groups": ["temp_nodes_to_upgrade"], "host_name": "ci-vm-10-0-150-33.hosted.upshift.rdu2.redhat.com", "host_vars": {}}, "ansible_loop_var": "item", "changed": false, "item": "ci-vm-10-0-150-33.hosted.upshift.rdu2.redhat.com"}
11-16 16:18:43  skipping: [ci-vm-10-0-151-98.hosted.upshift.rdu2.redhat.com] => (item=ci-vm-10-0-148-20.hosted.upshift.rdu2.redhat.com)  => {"ansible_loop_var": "item", "changed": false, "item": "ci-vm-10-0-148-20.hosted.upshift.rdu2.redhat.com", "skip_reason": "Conditional result was False"}
11-16 16:18:43  
11-16 16:18:43  TASK [Fail if temp_nodes_to_upgrade is empty with openshift_upgrade_nodes_label] ***
11-16 16:18:43  skipping: [ci-vm-10-0-151-98.hosted.upshift.rdu2.redhat.com] => {"changed": false, "skip_reason": "Conditional result was False"}
11-16 16:18:43  
11-16 16:18:43  TASK [Evaluate oo_nodes_to_upgrade] ********************************************
11-16 16:18:43  ok: [ci-vm-10-0-151-98.hosted.upshift.rdu2.redhat.com] => (item=ci-vm-10-0-150-33.hosted.upshift.rdu2.redhat.com) => {"add_host": {"groups": ["oo_nodes_to_upgrade"], "host_name": "ci-vm-10-0-150-33.hosted.upshift.rdu2.redhat.com", "host_vars": {}}, "ansible_loop_var": "item", "changed": false, "item": "ci-vm-10-0-150-33.hosted.upshift.rdu2.redhat.com"}


After reviewing this PR and running some more testing, I found that it introduces a new regression: if a host's short name itself includes '.', openshift_upgrade_nodes_label stops working again.
E.g.:
[root@jialiu ~]# oc get node
NAME                               STATUS    ROLES     AGE       VERSION
jialiu.310master-etcd-nfs-1        Ready     master    21m       v1.10.0+b81c8f8
jialiu.310node-1                   Ready     compute   7m        v1.10.0+b81c8f8
jialiu.310node-registry-router-1   Ready     <none>    7m        v1.10.0+b81c8f8
[root@jialiu ~]# hostname
jialiu.310master-etcd-nfs-1
[root@jialiu ~]# hostname -f
jialiu.310master-etcd-nfs-1.int.1116-3sm.qe.rhcloud.com


Running the playbook with openshift_upgrade_nodes_label="node-role.kubernetes.io/compute=true" set, it upgrades *all* nodes.
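
A guess at the mechanism based on the PR title ("Allow short nodes names when filtering"); the actual expression in PR 12269 may differ. If the mapping compares only the first dot-separated label of each node name, then every node name in this cluster truncates to the same value, so the labeled node appears to match every inventory host:

  # Illustrative Jinja evaluation only, not the playbook's actual code:
  - debug:
      msg: "{{ 'jialiu.310node-1'.split('.') | first }}"   # -> "jialiu"
  # 'jialiu.310master-etcd-nfs-1' and 'jialiu.310node-registry-router-1'
  # truncate to "jialiu" as well, so the label filter matches every host.

The task output below shows all nodes landing in oo_nodes_to_upgrade despite the label filter: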

11-16 18:02:29  TASK [Retrieve list of openshift nodes matching upgrade label] *****************
11-16 18:02:30  ok: [ci-vm-10-0-150-189.hosted.upshift.rdu2.redhat.com] => {"changed": false, "module_results": {"cmd": "/usr/bin/oc get node --selector=node-role.kubernetes.io/compute=true -o json -n default", "results": [{"apiVersion": "v1", "items": [{"apiVersion": "v1", "kind": "Node", "metadata": {"annotations": {"node.openshift.io/md5sum": "1609592c031b10f708646bcd3acfd491", "volumes.kubernetes.io/controller-managed-attach-detach": "true"}, "creationTimestamp": "2020-11-16T09:49:20Z", "labels": {"beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "jialiu.310node-1", "node-role.kubernetes.io/compute": "true", "role": "node"}, "name": "jialiu.310node-1", "namespace": "", "resourceVersion": "5891", "selfLink": "/api/v1/nodes/jialiu.310node-1", "uid": "ffddadb0-27f0-11eb-b51a-fa163e8ff336"}, "spec": {"externalID": "jialiu.310node-1"}, "status": {"addresses": [{"address": "10.0.149.33", "type": "InternalIP"}, {"address": "jialiu.310node-1", "type": "Hostname"}], "allocatable": {"cpu": "2", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "3777852Ki", "pods": "250"}, "capacity": {"cpu": "2", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "3880252Ki", "pods": "250"}, "conditions": [{"lastHeartbeatTime": "2020-11-16T10:02:18Z", "lastTransitionTime": "2020-11-16T09:49:19Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk"}, {"lastHeartbeatTime": "2020-11-16T10:02:18Z", "lastTransitionTime": "2020-11-16T09:49:19Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure"}, {"lastHeartbeatTime": "2020-11-16T10:02:18Z", "lastTransitionTime": "2020-11-16T09:49:19Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure"}, {"lastHeartbeatTime": "2020-11-16T10:02:18Z", "lastTransitionTime": "2020-11-16T09:49:19Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure"}, {"lastHeartbeatTime": "2020-11-16T10:02:18Z", "lastTransitionTime": "2020-11-16T10:01:38Z", "message": "kubelet is posting ready status", "reason": "KubeletReady", "status": "True", "type": "Ready"}], "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"names": ["registry.reg-aws.openshift.com:443/openshift3/ose-node@sha256:24410a629ae9947fa8954020d5487b3e3f6fc322a2e3b13501c88ff620d82c9c", "registry.reg-aws.openshift.com:443/openshift3/ose-node:v3.10"], "sizeBytes": 1318296091}, {"names": ["registry.reg-aws.openshift.com:443/openshift3/ose-docker-builder@sha256:ac6fad24d8a69b14393de02b811b32bf5d2922191f12a47832730d3f0c292e6a", "registry.reg-aws.openshift.com:443/openshift3/ose-docker-builder:v3.10.181"], "sizeBytes": 851762637}, {"names": ["registry.reg-aws.openshift.com:443/openshift3/ose-deployer@sha256:e4c89a608bfc56c635965bb4e299ece02996e2d6b4285ba2400429ad8acab692", "registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.10.181"], "sizeBytes": 815443997}, {"names": ["docker-registry.default.svc:5000/install-test/nodejs-mongodb-example@sha256:0d3f5c6205d92c8a55f0ac9d72f78a3c44bac534dec78af81f6029036d816276", "docker-registry.default.svc:5000/install-test/nodejs-mongodb-example:latest"], "sizeBytes": 561116639}, {"names": ["docker-registry.default.svc:5000/openshift/nodejs@sha256:fe6d5c86580ad8db28aa5a112709917aa66e7026ae866adeab4b2973cdd6c598"], 
"sizeBytes": 553781799}, {"names": ["docker-registry.default.svc:5000/openshift/mongodb@sha256:03242ef7890abb74e1d526ec4ce848f9319b5c3ae12bfac55e00e67e8b9793e5"], "sizeBytes": 492058519}, {"names": ["registry.reg-aws.openshift.com:443/openshift3/ose-ansible-service-broker@sha256:014978392cac805b136868b5be2d967e5f52f082daa19b6c990b1c1fd9b35d2e", "registry.reg-aws.openshift.com:443/openshift3/ose-ansible-service-broker:v3.10"], "sizeBytes": 459828482}, {"names": ["registry.reg-aws.openshift.com:443/openshift3/ose-pod@sha256:1ec9bd41b9767663190e579df3204244a9fd90b6b320c22a131a098771c205ef", "registry.reg-aws.openshift.com:443/openshift3/ose-pod:v3.10", "registry.reg-aws.openshift.com:443/openshift3/ose-pod:v3.10.183"], "sizeBytes": 230820284}], "nodeInfo": {"architecture": "amd64", "bootID": "b2e4e6f7-37f6-4529-83a3-dfcbd3dac560", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-1062.4.1.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "29e4441f307c4ffb8e57ad44effb2c94", "operatingSystem": "linux", "osImage": "Red Hat Enterprise Linux Server 7.7 (Maipo)", "systemUUID": "9474AAD0-1632-4D27-BC13-611C55D017CA"}}}], "kind": "List", "metadata": {"resourceVersion": "", "selfLink": ""}}], "returncode": 0}, "state": "list"}
11-16 18:02:30  
11-16 18:02:30  TASK [Fail if no nodes match openshift_upgrade_nodes_label] ********************
11-16 18:02:30  skipping: [ci-vm-10-0-150-189.hosted.upshift.rdu2.redhat.com] => {"changed": false, "skip_reason": "Conditional result was False"}
11-16 18:02:30  
11-16 18:02:30  TASK [Map labelled nodes to inventory hosts] ***********************************
11-16 18:02:30  ok: [ci-vm-10-0-150-189.hosted.upshift.rdu2.redhat.com] => (item=ci-vm-10-0-150-189.hosted.upshift.rdu2.redhat.com) => {"add_host": {"groups": ["temp_nodes_to_upgrade"], "host_name": "ci-vm-10-0-150-189.hosted.upshift.rdu2.redhat.com", "host_vars": {}}, "ansible_loop_var": "item", "changed": false, "item": "ci-vm-10-0-150-189.hosted.upshift.rdu2.redhat.com"}
11-16 18:02:30  ok: [ci-vm-10-0-150-189.hosted.upshift.rdu2.redhat.com] => (item=ci-vm-10-0-150-126.hosted.upshift.rdu2.redhat.com) => {"add_host": {"groups": ["temp_nodes_to_upgrade"], "host_name": "ci-vm-10-0-150-126.hosted.upshift.rdu2.redhat.com", "host_vars": {}}, "ansible_loop_var": "item", "changed": false, "item": "ci-vm-10-0-150-126.hosted.upshift.rdu2.redhat.com"}
11-16 18:02:30  ok: [ci-vm-10-0-150-189.hosted.upshift.rdu2.redhat.com] => (item=ci-vm-10-0-149-33.hosted.upshift.rdu2.redhat.com) => {"add_host": {"groups": ["temp_nodes_to_upgrade"], "host_name": "ci-vm-10-0-149-33.hosted.upshift.rdu2.redhat.com", "host_vars": {}}, "ansible_loop_var": "item", "changed": false, "item": "ci-vm-10-0-149-33.hosted.upshift.rdu2.redhat.com"}
11-16 18:02:30  
11-16 18:02:30  TASK [Fail if temp_nodes_to_upgrade is empty with openshift_upgrade_nodes_label] ***
11-16 18:02:30  skipping: [ci-vm-10-0-150-189.hosted.upshift.rdu2.redhat.com] => {"changed": false, "skip_reason": "Conditional result was False"}
11-16 18:02:30  
11-16 18:02:30  TASK [Evaluate oo_nodes_to_upgrade] ********************************************
11-16 18:02:30  ok: [ci-vm-10-0-150-189.hosted.upshift.rdu2.redhat.com] => (item=ci-vm-10-0-150-189.hosted.upshift.rdu2.redhat.com) => {"add_host": {"groups": ["oo_nodes_to_upgrade"], "host_name": "ci-vm-10-0-150-189.hosted.upshift.rdu2.redhat.com", "host_vars": {}}, "ansible_loop_var": "item", "changed": false, "item": "ci-vm-10-0-150-189.hosted.upshift.rdu2.redhat.com"}
11-16 18:02:30  ok: [ci-vm-10-0-150-189.hosted.upshift.rdu2.redhat.com] => (item=ci-vm-10-0-150-126.hosted.upshift.rdu2.redhat.com) => {"add_host": {"groups": ["oo_nodes_to_upgrade"], "host_name": "ci-vm-10-0-150-126.hosted.upshift.rdu2.redhat.com", "host_vars": {}}, "ansible_loop_var": "item", "changed": false, "item": "ci-vm-10-0-150-126.hosted.upshift.rdu2.redhat.com"}
11-16 18:02:30  ok: [ci-vm-10-0-150-189.hosted.upshift.rdu2.redhat.com] => (item=ci-vm-10-0-149-33.hosted.upshift.rdu2.redhat.com) => {"add_host": {"groups": ["oo_nodes_to_upgrade"], "host_name": "ci-vm-10-0-149-33.hosted.upshift.rdu2.redhat.com", "host_vars": {}}, "ansible_loop_var": "item", "changed": false, "item": "ci-vm-10-0-149-33.hosted.upshift.rdu2.redhat.com"}

Comment 18 Russell Teague 2020-11-16 19:44:00 UTC
Reverting the change to address the regression.

Comment 20 Russell Teague 2020-11-17 13:44:29 UTC
Reverting bugzilla automation.

Comment 25 Johnny Liu 2020-11-23 05:40:20 UTC
Verified this bug with openshift-ansible-3.11.320-1.git.0.a1ff75c.el7.noarch, PASS.


11-23 12:27:16  TASK [Retrieve list of openshift nodes matching upgrade label] *****************
11-23 12:27:17  ok: [ci-vm-10-0-151-8.hosted.upshift.rdu2.redhat.com] => {"changed": false, "module_results": {"cmd": "/usr/bin/oc get node --selector=router=enabled -o json -n default", "results": [{"apiVersion": "v1", "items": [{"apiVersion": "v1", "kind": "Node", "metadata": {"annotations": {"node.openshift.io/md5sum": "5d361a21d3e1e5023fc3fecd7894b173", "volumes.kubernetes.io/controller-managed-attach-detach": "true"}, "creationTimestamp": "2020-11-23T04:04:06Z", "labels": {"beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "jialiu310node-registry-router-1", "registry": "enabled", "role": "node", "router": "enabled"}, "name": "jialiu310node-registry-router-1", "namespace": "", "resourceVersion": "7138", "selfLink": "/api/v1/nodes/jialiu310node-registry-router-1", "uid": "ee5ae993-2d40-11eb-85b6-fa163e5e6215"}, "spec": {"externalID": "jialiu310node-registry-router-1"}, "status": {"addresses": [{"address": "10.0.151.177", "type": "InternalIP"}, {"address": "jialiu310node-registry-router-1", "type": "Hostname"}], "allocatable": {"cpu": "2", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "3777852Ki", "pods": "250"}, "capacity": {"cpu": "2", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "3880252Ki", "pods": "250"}, "conditions": [{"lastHeartbeatTime": "2020-11-23T04:27:06Z", "lastTransitionTime": "2020-11-23T04:16:15Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk"}, {"lastHeartbeatTime": "2020-11-23T04:27:06Z", "lastTransitionTime": "2020-11-23T04:16:15Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure"}, {"lastHeartbeatTime": "2020-11-23T04:27:06Z", "lastTransitionTime": "2020-11-23T04:16:15Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure"}, {"lastHeartbeatTime": "2020-11-23T04:27:06Z", "lastTransitionTime": "2020-11-23T04:04:06Z", "message": "kubelet has sufficient PID available", "reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure"}, {"lastHeartbeatTime": "2020-11-23T04:27:06Z", "lastTransitionTime": "2020-11-23T04:26:06Z", "message": "kubelet is posting ready status", "reason": "KubeletReady", "status": "True", "type": "Ready"}], "daemonEndpoints": {"kubeletEndpoint": {"Port": 10250}}, "images": [{"names": ["registry.reg-aws.openshift.com:443/openshift3/ose-node@sha256:24410a629ae9947fa8954020d5487b3e3f6fc322a2e3b13501c88ff620d82c9c", "registry.reg-aws.openshift.com:443/openshift3/ose-node:v3.10"], "sizeBytes": 1318296091}, {"names": ["registry.reg-aws.openshift.com:443/openshift3/ose-haproxy-router@sha256:c5a1c8d3e541a179d4f9ca8945971b1a4d9c03ecf11f17dcc64cbaa0d8f5e551", "registry.reg-aws.openshift.com:443/openshift3/ose-haproxy-router:v3.10"], "sizeBytes": 835711387}, {"names": ["registry.reg-aws.openshift.com:443/openshift3/ose-deployer@sha256:e4c89a608bfc56c635965bb4e299ece02996e2d6b4285ba2400429ad8acab692", "registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.10.181"], "sizeBytes": 815443997}, {"names": ["registry.reg-aws.openshift.com:443/openshift3/ose-docker-registry@sha256:a4b8f32cfbac611e042c55fde53f3b66247f873d957875578364db7530590ae6", "registry.reg-aws.openshift.com:443/openshift3/ose-docker-registry:v3.10"], "sizeBytes": 303204658}, {"names": 
["registry.reg-aws.openshift.com:443/openshift3/ose-pod@sha256:1ec9bd41b9767663190e579df3204244a9fd90b6b320c22a131a098771c205ef", "registry.reg-aws.openshift.com:443/openshift3/ose-pod:v3.10", "registry.reg-aws.openshift.com:443/openshift3/ose-pod:v3.10.183"], "sizeBytes": 230820284}], "nodeInfo": {"architecture": "amd64", "bootID": "79a727c4-b2e3-402c-a8fb-15901b567371", "containerRuntimeVersion": "docker://1.13.1", "kernelVersion": "3.10.0-1062.4.1.el7.x86_64", "kubeProxyVersion": "v1.10.0+b81c8f8", "kubeletVersion": "v1.10.0+b81c8f8", "machineID": "29e4441f307c4ffb8e57ad44effb2c94", "operatingSystem": "linux", "osImage": "Red Hat Enterprise Linux Server 7.7 (Maipo)", "systemUUID": "53A87E43-4AA2-4399-869F-9574F2BADBBE"}}}], "kind": "List", "metadata": {"resourceVersion": "", "selfLink": ""}}], "returncode": 0}, "state": "list"}
11-23 12:27:17  
11-23 12:27:17  TASK [Fail if no nodes match openshift_upgrade_nodes_label] ********************
11-23 12:27:17  skipping: [ci-vm-10-0-151-8.hosted.upshift.rdu2.redhat.com] => {"changed": false, "skip_reason": "Conditional result was False"}
11-23 12:27:17  
11-23 12:27:17  TASK [Map labelled nodes to inventory hosts] ***********************************
11-23 12:27:17  skipping: [ci-vm-10-0-151-8.hosted.upshift.rdu2.redhat.com] => (item=ci-vm-10-0-151-8.hosted.upshift.rdu2.redhat.com)  => {"ansible_loop_var": "item", "changed": false, "item": "ci-vm-10-0-151-8.hosted.upshift.rdu2.redhat.com", "skip_reason": "Conditional result was False"}
11-23 12:27:17  ok: [ci-vm-10-0-151-8.hosted.upshift.rdu2.redhat.com] => (item=ci-vm-10-0-151-177.hosted.upshift.rdu2.redhat.com) => {"add_host": {"groups": ["temp_nodes_to_upgrade"], "host_name": "ci-vm-10-0-151-177.hosted.upshift.rdu2.redhat.com", "host_vars": {}}, "ansible_loop_var": "item", "changed": false, "item": "ci-vm-10-0-151-177.hosted.upshift.rdu2.redhat.com"}
11-23 12:27:17  skipping: [ci-vm-10-0-151-8.hosted.upshift.rdu2.redhat.com] => (item=ci-vm-10-0-149-23.hosted.upshift.rdu2.redhat.com)  => {"ansible_loop_var": "item", "changed": false, "item": "ci-vm-10-0-149-23.hosted.upshift.rdu2.redhat.com", "skip_reason": "Conditional result was False"}
11-23 12:27:17  
11-23 12:27:17  TASK [Fail if temp_nodes_to_upgrade is empty with openshift_upgrade_nodes_label] ***
11-23 12:27:17  skipping: [ci-vm-10-0-151-8.hosted.upshift.rdu2.redhat.com] => {"changed": false, "skip_reason": "Conditional result was False"}
11-23 12:27:17  
11-23 12:27:17  TASK [Evaluate oo_nodes_to_upgrade] ********************************************
11-23 12:27:17  ok: [ci-vm-10-0-151-8.hosted.upshift.rdu2.redhat.com] => (item=ci-vm-10-0-151-177.hosted.upshift.rdu2.redhat.com) => {"add_host": {"groups": ["oo_nodes_to_upgrade"], "host_name": "ci-vm-10-0-151-177.hosted.upshift.rdu2.redhat.com", "host_vars": {}}, "ansible_loop_var": "item", "changed": false, "item": "ci-vm-10-0-151-177.hosted.upshift.rdu2.redhat.com"}


[root@jialiu310master-etcd-nfs-1 ~]# oc get node
NAME                              STATUS    ROLES     AGE       VERSION
jialiu310master-etcd-nfs-1        Ready     master    1h        v1.11.0+d4cacc0
jialiu310node-1                   Ready     compute   1h        v1.10.0+b81c8f8
jialiu310node-registry-router-1   Ready     <none>    1h        v1.11.0+d4cacc0


[root@jialiu310node-registry-router-1 ~]# openshift version
openshift v3.11.320


Only the nodes matched by the node label filter got upgraded.

Comment 28 errata-xmlrpc 2020-12-16 12:35:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 3.11.343 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5363

