Bug 1994729 - Upgrade from 4.6 to 4.7 to 4.8 with mcp worker "paused=true": CRI-O reports "panic: close of closed channel", which leads to a master node going into a restart loop
Summary: Upgrade from 4.6 to 4.7 to 4.8 with mcp worker "paused=true": CRI-O reports "p...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.7.z
Assignee: Peter Hunt
QA Contact: Sunil Choudhary
URL:
Whiteboard:
Depends On: 1994728
Blocks: 1994730
 
Reported: 2021-08-17 19:07 UTC by OpenShift BugZilla Robot
Modified: 2021-09-01 18:24 UTC
CC List: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-09-01 18:24:07 UTC
Target Upstream Version:
Embargoed:




Links:
  Github cri-o/cri-o pull 5214 (last updated 2021-08-18 13:20:52 UTC)
  Red Hat Product Errata RHSA-2021:3262 (last updated 2021-09-01 18:24:17 UTC)

Description OpenShift BugZilla Robot 2021-08-17 19:07:00 UTC
+++ This bug was initially created as a clone of Bug #1994728 +++

+++ This bug was initially created as a clone of Bug #1994454 +++

Description of problem:
When upgrading a cluster from 4.6 to 4.7 to 4.8 with the worker mcp set to "paused=true" before the upgrade, the upgrade to 4.7 succeeds, but the upgrade to 4.8 fails: CRI-O on one master node reports "panic: close of closed channel" and the kubelet reports "Failed to create existing container".

Version-Release number of selected component (if applicable):
4.6.0-0.nightly-2021-08-16-005317
4.7.24
4.8.5

How reproducible:
hit once

Steps to Reproduce:
1.create a 4.6 cluster

2.$ oc edit machineconfigpool/worker
and set "paused: true", then upgrade the cluster to 4.7.24 (an equivalent non-interactive command is shown after these steps)

3.upgrade the cluster to 4.8.5
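
For step 2, the worker pool can also be paused non-interactively; a hedged equivalent of the edit above:

$ oc patch machineconfigpool/worker --type merge -p '{"spec":{"paused":true}}'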

Actual results:
2.the upgrade to 4.7.24 succeeded.
3.the upgrade to 4.8.5 got stuck: one master node went "NotReady" and kept restarting.

Expected results:
3.the upgrade to 4.8.5 succeeds

Additional info:
$ oc get mcp 
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-98cd31e71ddd99d04c459bea53542351   False     True       False      3              2                   3                     0                      26h
worker   rendered-worker-807445e805cb634439d34c9a1312b0ad   False     False      False      3              0                   0                     0                      26h

$ oc get node -o wide 
NAME                                         STATUS     ROLES    AGE   VERSION           INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                                                       KERNEL-VERSION                 CONTAINER-RUNTIME
ip-10-0-139-77.us-east-2.compute.internal    NotReady   master   26h   v1.21.1+9807387   10.0.139.77    <none>        Red Hat Enterprise Linux CoreOS 48.84.202108062347-0 (Ootpa)   4.18.0-305.10.2.el8_4.x86_64   cri-o://1.21.2-9.rhaos4.8.git3b87110.el8
ip-10-0-153-102.us-east-2.compute.internal   Ready      worker   26h   v1.19.0+4c3480d   10.0.153.102   <none>        Red Hat Enterprise Linux CoreOS 46.82.202108052057-0 (Ootpa)   4.18.0-193.60.2.el8_2.x86_64   cri-o://1.19.3-9.rhaos4.6.gitc8a7d88.el8
ip-10-0-168-232.us-east-2.compute.internal   Ready      worker   26h   v1.19.0+4c3480d   10.0.168.232   <none>        Red Hat Enterprise Linux CoreOS 46.82.202108052057-0 (Ootpa)   4.18.0-193.60.2.el8_2.x86_64   cri-o://1.19.3-9.rhaos4.6.gitc8a7d88.el8
ip-10-0-169-210.us-east-2.compute.internal   Ready      master   26h   v1.21.1+9807387   10.0.169.210   <none>        Red Hat Enterprise Linux CoreOS 48.84.202108062347-0 (Ootpa)   4.18.0-305.10.2.el8_4.x86_64   cri-o://1.21.2-9.rhaos4.8.git3b87110.el8
ip-10-0-204-62.us-east-2.compute.internal    Ready      master   26h   v1.21.1+9807387   10.0.204.62    <none>        Red Hat Enterprise Linux CoreOS 48.84.202108062347-0 (Ootpa)   4.18.0-305.10.2.el8_4.x86_64   cri-o://1.21.2-9.rhaos4.8.git3b87110.el8
ip-10-0-214-48.us-east-2.compute.internal    Ready      worker   26h   v1.19.0+4c3480d   10.0.214.48    <none>        Red Hat Enterprise Linux CoreOS 46.82.202108052057-0 (Ootpa)   4.18.0-193.60.2.el8_2.x86_64   cri-o://1.19.3-9.rhaos4.6.gitc8a7d88.el8

$ oc describe node ip-10-0-139-77.us-east-2.compute.internal
...
  Normal   Starting                 106s                  kubelet, ip-10-0-139-77.us-east-2.compute.internal  Starting kubelet.
  Normal   NodeHasSufficientMemory  105s                  kubelet, ip-10-0-139-77.us-east-2.compute.internal  Node ip-10-0-139-77.us-east-2.compute.internal status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    105s                  kubelet, ip-10-0-139-77.us-east-2.compute.internal  Node ip-10-0-139-77.us-east-2.compute.internal status is now: NodeHasNoDiskPressure
  Normal   NodeAllocatableEnforced  105s                  kubelet, ip-10-0-139-77.us-east-2.compute.internal  Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientPID     105s                  kubelet, ip-10-0-139-77.us-east-2.compute.internal  Node ip-10-0-139-77.us-east-2.compute.internal status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientPID     95s                   kubelet, ip-10-0-139-77.us-east-2.compute.internal  Node ip-10-0-139-77.us-east-2.compute.internal status is now: NodeHasSufficientPID
  Normal   Starting                 95s                   kubelet, ip-10-0-139-77.us-east-2.compute.internal  Starting kubelet.
  Normal   NodeHasSufficientMemory  95s                   kubelet, ip-10-0-139-77.us-east-2.compute.internal  Node ip-10-0-139-77.us-east-2.compute.internal status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    95s                   kubelet, ip-10-0-139-77.us-east-2.compute.internal  Node ip-10-0-139-77.us-east-2.compute.internal status is now: NodeHasNoDiskPressure
  Normal   NodeAllocatableEnforced  94s                   kubelet, ip-10-0-139-77.us-east-2.compute.internal  Updated Node Allocatable limit across pods
...

--- Additional comment from minmli on 2021-08-17 10:24:16 UTC ---

crio log: 

Aug 17 09:28:50 ip-10-0-139-77 crio[1298394]: time="2021-08-17 09:28:50.783221573Z" level=info msg="Checking image status: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cecb7839b484569a1c2826af500ff1107f811725296c58d13e0f5df6a10e882b" id=863cd639-eff9-496e-9200-b90cae4a3db1 name=/runtime.v1alpha2.ImageService/ImageStatus
Aug 17 09:28:50 ip-10-0-139-77 crio[1298394]: time="2021-08-17 09:28:50.785152581Z" level=info msg="Image status: &{0xc0001e4a10 map[]}" id=863cd639-eff9-496e-9200-b90cae4a3db1 name=/runtime.v1alpha2.ImageService/ImageStatus
Aug 17 09:28:51 ip-10-0-139-77 crio[1298394]: panic: close of closed channel
Aug 17 09:28:51 ip-10-0-139-77 crio[1298394]: goroutine 1323 [running]:
Aug 17 09:28:51 ip-10-0-139-77 crio[1298394]: panic(0x5573ffea0e20, 0x55740013b840)
Aug 17 09:28:51 ip-10-0-139-77 crio[1298394]:         /usr/lib/golang/src/runtime/panic.go:1065 +0x565 fp=0xc00063d5b0 sp=0xc00063d4e8 pc=0x5573fe02b4a5
Aug 17 09:28:51 ip-10-0-139-77 crio[1298394]: runtime.closechan(0xc00185aba0)
Aug 17 09:28:51 ip-10-0-139-77 crio[1298394]:         /usr/lib/golang/src/runtime/chan.go:363 +0x3f5 fp=0xc00063d5f0 sp=0xc00063d5b0 pc=0x5573fdff97b5
Aug 17 09:28:51 ip-10-0-139-77 crio[1298394]: github.com/cri-o/cri-o/internal/oci.(*runtimeOCI).ReopenContainerLog(0xc0005cbe30, 0x5574001b0240, 0xc000d1fad0, 0xc000dee000, 0x55740015e5f8, 0xc002024d50)
Aug 17 09:28:51 ip-10-0-139-77 crio[1298394]:         /builddir/build/BUILD/cri-o-3b87110e2208fd889e35829d92f4400742568fa6/_output/src/github.com/cri-o/cri-o/internal/oci/runtime_oci.go:1050 +0x7c6 fp=0xc00063d758 sp=0xc00063d5f0 pc=0x5573ff65e186
Aug 17 09:28:51 ip-10-0-139-77 crio[1298394]: github.com/cri-o/cri-o/internal/oci.(*Runtime).ReopenContainerLog(0xc0006aff50, 0x5574001b0240, 0xc000d1fad0, 0xc000dee000, 0x0, 0x0)
Aug 17 09:28:51 ip-10-0-139-77 crio[1298394]:         /builddir/build/BUILD/cri-o-3b87110e2208fd889e35829d92f4400742568fa6/_output/src/github.com/cri-o/cri-o/internal/oci/oci.go:433 +0x93 fp=0xc00063d798 sp=0xc00063d758 pc=0x5573ff654113
Aug 17 09:28:51 ip-10-0-139-77 crio[1298394]: github.com/cri-o/cri-o/server.(*Server).ReopenContainerLog(0xc0001b6000, 0x5574001b0240, 0xc000d1fad0, 0xc00063d848, 0x55740000e300, 0x5574000c9c80)
Aug 17 09:28:51 ip-10-0-139-77 crio[1298394]:         /builddir/build/BUILD/cri-o-3b87110e2208fd889e35829d92f4400742568fa6/_output/src/github.com/cri-o/cri-o/server/container_reopen_log.go:28 +0x14f fp=0xc00063d818 sp=0xc00063d798 pc=0x5573ff70b88f
Aug 17 09:28:51 ip-10-0-139-77 crio[1298394]: github.com/cri-o/cri-o/server/cri/v1alpha2.(*service).ReopenContainerLog(0xc0000134e0, 0x5574001b0240, 0xc000d1fad0, 0xc002024ca8, 0xc0000134e0, 0x1, 0x1)
Aug 17 09:39:44 ip-10-0-139-77 crio[1417705]: github.com/cri-o/cri-o/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc001073660, 0x0, 0x0, 0x0)
Aug 17 09:39:44 ip-10-0-139-77 crio[1417705]:         /builddir/build/BUILD/cri-o-3b87110e2208fd889e35829d92f4400742568fa6/_output/src/github.com/cri-o/cri-o/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x93 fp=0xc0011cdc98 sp=0xc0011cdbd0 pc=0x559fb5efa893
Aug 17 09:39:44 ip-10-0-139-77 crio[1417705]: github.com/cri-o/cri-o/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc0006b4780)
Aug 17 09:39:44 ip-10-0-139-77 crio[1417705]:         /builddir/build/BUILD/cri-o-3b87110e2208fd889e35829d92f4400742568fa6/_output/src/github.com/cri-o/cri-o/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206 fp=0xc0011ddfd8 sp=0xc0011cdc98 pc=0x559fb5ef9aa6
Aug 17 09:39:44 ip-10-0-139-77 crio[1417705]: runtime.goexit()
Aug 17 09:39:44 ip-10-0-139-77 crio[1417705]:         /usr/lib/golang/src/runtime/asm_amd64.s:1371 +0x1 fp=0xc0011ddfe0 sp=0xc0011ddfd8 pc=0x559fb5709901
Aug 17 09:39:44 ip-10-0-139-77 crio[1417705]: created by github.com/cri-o/cri-o/vendor/github.com/fsnotify/fsnotify.NewWatcher
Aug 17 09:39:44 ip-10-0-139-77 crio[1417705]:         /builddir/build/BUILD/cri-o-3b87110e2208fd889e35829d92f4400742568fa6/_output/src/github.com/cri-o/cri-o/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1ab
Aug 17 09:39:45 ip-10-0-139-77 systemd[1]: crio.service: Main process exited, code=killed, status=6/ABRT
Aug 17 09:39:45 ip-10-0-139-77 systemd[1]: crio.service: Failed with result 'signal'.
Aug 17 09:39:45 ip-10-0-139-77 systemd[1]: crio.service: Consumed 4.162s CPU time
Aug 17 09:39:45 ip-10-0-139-77 systemd[1]: crio.service: Service RestartSec=100ms expired, scheduling restart.
Aug 17 09:39:45 ip-10-0-139-77 systemd[1]: crio.service: Scheduled restart job, restart counter is at 742.
Aug 17 09:39:45 ip-10-0-139-77 systemd[1]: Stopped Open Container Initiative Daemon.
Aug 17 09:39:45 ip-10-0-139-77 systemd[1]: crio.service: Consumed 4.162s CPU time
Aug 17 09:39:45 ip-10-0-139-77 systemd[1]: Starting Open Container Initiative Daemon...
Aug 17 09:39:45 ip-10-0-139-77 crio[1419537]: time="2021-08-17 09:39:45.465278484Z" level=info msg="Starting CRI-O, version: 1.21.2-9.rhaos4.8.git3b87110.el8, git: ()"
Aug 17 09:39:45 ip-10-0-139-77 crio[1419537]: time="2021-08-17 09:39:45.465418754Z" level=info msg="Node configuration value for hugetlb cgroup is true"
Aug 17 09:39:45 ip-10-0-139-77 crio[1419537]: time="2021-08-17 09:39:45.465429178Z" level=info msg="Node configuration value for pid cgroup is true"
Aug 17 09:39:45 ip-10-0-139-77 crio[1419537]: time="2021-08-17 09:39:45.465449326Z" level=info msg="Node configuration value for memoryswap cgroup is true"
Aug 17 09:39:45 ip-10-0-139-77 crio[1419537]: time="2021-08-17 09:39:45.473891730Z" level=info msg="Node configuration value for systemd CollectMode is true"
Aug 17 09:39:45 ip-10-0-139-77 crio[1419537]: time="2021-08-17 09:39:45.481865984Z" level=info msg="Node configuration value for systemd AllowedCPUs is true"
Aug 17 09:39:45 ip-10-0-139-77 crio[1419537]: time="2021-08-17 09:39:45.492177440Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
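
The panic above aborts the whole CRI-O process: in Go, calling close() on a channel that is already closed terminates the program, so a single racing ReopenContainerLog request is enough to crash the service and put the node into the systemd restart loop shown in this log. Below is a minimal sketch of that failure class and a common guard; the names and the sync.Once guard are assumptions for illustration only, not the actual cri-o code or the fix in the linked PR 5214.

package main

import (
	"fmt"
	"sync"
)

// logStream stands in for per-container log-watch state; it exists only to
// illustrate the panic class, the real cri-o structures are different.
type logStream struct {
	stop     chan struct{}
	stopOnce sync.Once
}

// Close is safe to call more than once: sync.Once guarantees the channel is
// closed exactly one time, so a second (possibly concurrent) call cannot
// trigger "panic: close of closed channel".
func (s *logStream) Close() {
	s.stopOnce.Do(func() { close(s.stop) })
}

func main() {
	s := &logStream{stop: make(chan struct{})}
	s.Close()
	s.Close() // a bare close(s.stop) here would instead abort the whole process
	fmt.Println("stop channel closed exactly once")
}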

--- Additional comment from minmli on 2021-08-17 10:24:47 UTC ---

kubelet log:

Aug 17 09:29:32 ip-10-0-139-77 hyperkube[1307272]: I0817 09:29:32.229468 1307272 plugins.go:639] Loaded volume plugin "kubernetes.io/csi"
Aug 17 09:29:32 ip-10-0-139-77 hyperkube[1307272]: I0817 09:29:32.229599 1307272 server.go:1190] "Started kubelet"
Aug 17 09:29:32 ip-10-0-139-77 hyperkube[1307272]: E0817 09:29:32.230147 1307272 kubelet.go:1311] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Aug 17 09:29:32 ip-10-0-139-77 systemd[1]: Started Kubernetes Kubelet.
Aug 17 09:29:32 ip-10-0-139-77 hyperkube[1307272]: I0817 09:29:32.232665 1307272 server.go:149] "Starting to listen" address="0.0.0.0" port=10250
Aug 17 09:29:32 ip-10-0-139-77 hyperkube[1307272]: I0817 09:29:32.234984 1307272 server.go:405] "Adding debug handlers to kubelet server"
Aug 17 09:29:32 ip-10-0-139-77 hyperkube[1307272]: I0817 09:29:32.241854 1307272 certificate_manager.go:282] Certificate rotation is enabled.
Aug 17 09:29:32 ip-10-0-139-77 hyperkube[1307272]: I0817 09:29:32.241889 1307272 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 17 09:29:32 ip-10-0-139-77 hyperkube[1307272]: I0817 09:29:32.241897 1307272 certificate_manager.go:556] Certificate expiration is 2021-09-16 02:38:43 +0000 UTC, rotation deadline is 2021-09-08 05:22:04.727863975 +0000 UTC
Aug 17 09:29:32 ip-10-0-139-77 hyperkube[1307272]: I0817 09:29:32.342568 1307272 kubelet_node_status.go:453] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="us-east-2"
Aug 17 09:29:32 ip-10-0-139-77 hyperkube[1307272]: I0817 09:29:32.342579 1307272 kubelet_node_status.go:455] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="us-east-2"
Aug 17 09:29:32 ip-10-0-139-77 hyperkube[1307272]: E0817 09:29:32.372827 1307272 manager.go:1127] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod548969ba_a0c1_4a2a_b164_66a148ceea54.slice/crio-2d3b24a34ee60d22699478bbe8cc3c89075fb7642f89b0f71d5a59180b7f81c5.scope: Error finding container 2d3b24a34ee60d22699478bbe8cc3c89075fb7642f89b0f71d5a59180b7f81c5: Status 404 returned error &{%!s(*http.body=&{0xc000c64048 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x77baa0) %!s(func() error=0x77ba20)}
Aug 17 09:29:32 ip-10-0-139-77 hyperkube[1307272]: E0817 09:29:32.389743 1307272 manager.go:1127] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c996063_2362_45f6_a84f_e71dd5df4f9d.slice/crio-9ca73cbb686e2c9fd01ac8a779d8493d6ac8efeae2a818a616efc61fabca537f.scope: Error finding container 9ca73cbb686e2c9fd01ac8a779d8493d6ac8efeae2a818a616efc61fabca537f: Status 404 returned error &{%!s(*http.body=&{0xc000536dc8 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x77baa0) %!s(func() error=0x77ba20)}
Aug 17 09:29:32 ip-10-0-139-77 hyperkube[1307272]: E0817 09:29:32.392662 1307272 manager.go:1127] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod376e132a_5317_453f_b851_05b1fa45e49b.slice/crio-092318ed8473a8596c035c2dbdc9ea7fec00437cc89b7c5024f183fa18eab39a.scope: Error finding container 092318ed8473a8596c035c2dbdc9ea7fec00437cc89b7c5024f183fa18eab39a: Status 404 returned error &{%!s(*http.body=&{0xc000eb45d0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x77baa0) %!s(func() error=0x77ba20)}
Aug 17 09:29:32 ip-10-0-139-77 hyperkube[1307272]: E0817 09:29:32.399610 1307272 manager.go:1127] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode34dd56c_05bb_41a5_9304_910162ffb342.slice/crio-3c7e04fdca0944613e666ba7d0c961b7b9f2ee889a8d2349230cdd3d10d53a00.scope: Error finding container 3c7e04fdca0944613e666ba7d0c961b7b9f2ee889a8d2349230cdd3d10d53a00: Status 404 returned error &{%!s(*http.body=&{0xc000c64558 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x77baa0) %!s(func() error=0x77ba20)}
Aug 17 09:29:32 ip-10-0-139-77 hyperkube[1307272]: E0817 09:29:32.413027 1307272 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: <nil>, extraDiskErr: could not stat "/var/log/pods/openshift-kube-apiserver_kube-apiserver-ip-10-0-139-77.us-east-2.compute.internal_376e132a-5317-453f-b851-05b1fa45e49b/kube-apiserver-insecure-readyz/1.log" to get inode usage: stat /var/log/pods/openshift-kube-apiserver_kube-apiserver-ip-10-0-139-77.us-east-2.compute.internal_376e132a-5317-453f-b851-05b1fa45e49b/kube-apiserver-insecure-readyz/1.log: no such file or directory
Aug 17 09:29:32 ip-10-0-139-77 hyperkube[1307272]: I0817 09:29:32.415080 1307272 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv6
Aug 17 09:29:32 ip-10-0-139-77 hyperkube[1307272]: I0817 09:29:32.415105 1307272 status_manager.go:157] "Starting to sync pod status with apiserver"
Aug 17 09:29:32 ip-10-0-139-77 hyperkube[1307272]: I0817 09:29:32.415124 1307272 kubelet.go:1858] "Starting kubelet main sync loop"
Aug 17 09:29:32 ip-10-0-139-77 hyperkube[1307272]: E0817 09:29:32.415161 1307272 kubelet.go:1882] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 17 09:29:32 ip-10-0-139-77 hyperkube[1307272]: I0817 09:29:32.415497 1307272 reflector.go:219] Starting reflector *v1.RuntimeClass (0s) from k8s.io/client-go/informers/factory.go:134
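
The "Failed to create existing container ... Status 404" errors are the kubelet's cAdvisor manager failing to look containers up in a CRI-O that keeps crashing and restarting. As a generic debugging step (not something recorded in this bug), the container state and the recurring panic can be confirmed from the node itself via a debug shell:

$ oc debug node/ip-10-0-139-77.us-east-2.compute.internal
sh-4.4# chroot /host
sh-4.4# crictl ps -a | head
sh-4.4# journalctl -u crio.service --no-pager | grep -c "panic: close of closed channel"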

--- Additional comment from minmli on 2021-08-17 10:34:15 UTC ---

please access the must-gather via: http://file.nay.redhat.com/~minmli/eus-upgrade-must-gather.tar.gz

--- Additional comment from pehunt on 2021-08-17 19:05:21 UTC ---

fixed in attached PR, will backport to 4.9-4.6

Comment 2 Peter Hunt 2021-08-18 13:20:56 UTC
PR merged

Comment 5 Sunil Choudhary 2021-08-26 13:44:51 UTC
Verified on 4.7.26 nightly build.

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2021-08-25-190633   True        False         90m     Cluster version is 4.6.0-0.nightly-2021-08-25-190633

$ oc get machineconfigpool worker -o yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  creationTimestamp: "2021-08-26T09:58:37Z"
  generation: 3
  labels:
    machineconfiguration.openshift.io/mco-built-in: ""
    pools.operator.machineconfiguration.openshift.io/worker: ""
  name: worker
  resourceVersion: "52970"
  selfLink: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker
  uid: 5a28dd37-409d-4626-90bd-ed1852511461
spec:
  configuration:
    name: rendered-worker-8e575f0a99d70dd786512889ddc3e642
    source:
    - apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      name: 00-worker
    - apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      name: 01-worker-container-runtime
    - apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      name: 01-worker-kubelet
    - apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      name: 99-worker-generated-registries
    - apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      name: 99-worker-ssh
  machineConfigSelector:
    matchLabels:
      machineconfiguration.openshift.io/role: worker
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker: ""
  paused: true
  ...


$ oc adm upgrade --to-image=quay.io/openshift-release-dev/ocp-release:4.7.26-x86_64 --force --allow-explicit-upgrade
...
Updating to release image quay.io/openshift-release-dev/ocp-release:4.7.26-x86_64

$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.26    True        False         25m     Cluster version is 4.7.26

$ oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-538c233877e7e578d2b4d1f3329579a4   True      False      False      3              3                   3                     0                      3h27m
worker   rendered-worker-8e575f0a99d70dd786512889ddc3e642   False     False      False      3              0                   0                     0                      3h27m

$ oc get nodes -o wide
NAME                                          STATUS   ROLES    AGE     VERSION           INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                                                       KERNEL-VERSION                 CONTAINER-RUNTIME
ip-10-0-147-150.ap-south-1.compute.internal   Ready    master   3h29m   v1.20.0+4593a24   10.0.147.150   <none>        Red Hat Enterprise Linux CoreOS 47.84.202108181031-0 (Ootpa)   4.18.0-305.12.1.el8_4.x86_64   cri-o://1.20.4-11.rhaos4.7.git9d682e1.el8
ip-10-0-152-34.ap-south-1.compute.internal    Ready    worker   3h16m   v1.19.0+4c3480d   10.0.152.34    <none>        Red Hat Enterprise Linux CoreOS 46.82.202108251457-0 (Ootpa)   4.18.0-193.60.2.el8_2.x86_64   cri-o://1.19.3-11.rhaos4.6.git66a69b8.el8
ip-10-0-170-36.ap-south-1.compute.internal    Ready    worker   3h20m   v1.19.0+4c3480d   10.0.170.36    <none>        Red Hat Enterprise Linux CoreOS 46.82.202108251457-0 (Ootpa)   4.18.0-193.60.2.el8_2.x86_64   cri-o://1.19.3-11.rhaos4.6.git66a69b8.el8
ip-10-0-173-42.ap-south-1.compute.internal    Ready    master   3h29m   v1.20.0+4593a24   10.0.173.42    <none>        Red Hat Enterprise Linux CoreOS 47.84.202108181031-0 (Ootpa)   4.18.0-305.12.1.el8_4.x86_64   cri-o://1.20.4-11.rhaos4.7.git9d682e1.el8
ip-10-0-200-20.ap-south-1.compute.internal    Ready    master   3h29m   v1.20.0+4593a24   10.0.200.20    <none>        Red Hat Enterprise Linux CoreOS 47.84.202108181031-0 (Ootpa)   4.18.0-305.12.1.el8_4.x86_64   cri-o://1.20.4-11.rhaos4.7.git9d682e1.el8
ip-10-0-204-249.ap-south-1.compute.internal   Ready    worker   3h20m   v1.19.0+4c3480d   10.0.204.249   <none>        Red Hat Enterprise Linux CoreOS 46.82.202108251457-0 (Ootpa)   4.18.0-193.60.2.el8_2.x86_64   cri-o://1.19.3-11.rhaos4.6.git66a69b8.el8
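
For completeness: once the fix is verified and the worker nodes are meant to roll to the new rendered config, the pool would normally be unpaused. A hedged example, not part of the recorded verification steps:

$ oc patch machineconfigpool/worker --type merge -p '{"spec":{"paused":false}}'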

Comment 7 errata-xmlrpc 2021-09-01 18:24:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: OpenShift Container Platform 4.7.28 security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:3262

