This has effectively been fixed and closed via https://github.com/openshift/machine-config-operator/pull/1166 but for some reason the BZ hasn't moved from MODIFIED to ON_QA.
The bug was not automatically moved to MODIFIED because [1] remains open. Manually moving it to MODIFIED, as you've done, effectively says "we don't actually need [1] to fix this issue", but there's no way for the robots to figure that out on their own (yet! ;).

[1]: https://github.com/openshift/cluster-api/pull/132
Verified on 4.1.0-0.nightly-2020-02-04-094220

$ oc get nodes
NAME                           STATUS   ROLES    AGE   VERSION
ip-10-0-130-234.ec2.internal   Ready    worker   53m   v1.13.4+125a3c441
ip-10-0-135-69.ec2.internal    Ready    master   58m   v1.13.4+125a3c441
ip-10-0-145-208.ec2.internal   Ready    master   58m   v1.13.4+125a3c441
ip-10-0-155-197.ec2.internal   Ready    worker   53m   v1.13.4+125a3c441
ip-10-0-169-8.ec2.internal     Ready    worker   53m   v1.13.4+125a3c441
ip-10-0-174-89.ec2.internal    Ready    master   58m   v1.13.4+125a3c441

$ oc adm upgrade --force --to-image=quay.io/openshift-release-dev/ocp-release:4.2.16-x86_64
Updating to release image quay.io/openshift-release-dev/ocp-release:4.2.16-x86_64

$ watch oc get clusterversion

$ oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.2.16    True        False         False      160m
cloud-credential                           4.2.16    True        False         False      170m
cluster-autoscaler                         4.2.16    True        False         False      170m
console                                    4.2.16    True        False         False      80m
dns                                        4.2.16    True        False         False      170m
image-registry                             4.2.16    True        False         False      80m
ingress                                    4.2.16    True        False         False      165m
insights                                   4.2.16    True        False         False      103m
kube-apiserver                             4.2.16    True        False         False      167m
kube-controller-manager                    4.2.16    True        False         False      168m
kube-scheduler                             4.2.16    True        False         False      167m
machine-api                                4.2.16    True        False         False      170m
machine-config                             4.2.16    True        False         False      169m
marketplace                                4.2.16    True        False         False      75m
monitoring                                 4.2.16    True        False         False      58m
network                                    4.2.16    True        False         False      170m
node-tuning                                4.2.16    True        False         False      76m
openshift-apiserver                        4.2.16    True        False         False      59m
openshift-controller-manager               4.2.16    True        False         False      169m
openshift-samples                          4.2.16    True        False         False      102m
operator-lifecycle-manager                 4.2.16    True        False         False      169m
operator-lifecycle-manager-catalog         4.2.16    True        False         False      169m
operator-lifecycle-manager-packageserver   4.2.16    True        False         False      75m
service-ca                                 4.2.16    True        False         False      170m
service-catalog-apiserver                  4.2.16    True        False         False      166m
service-catalog-controller-manager         4.2.16    True        False         False      166m
storage                                    4.2.16    True        False         False      103m

$ oc get node
NAME                           STATUS   ROLES    AGE    VERSION
ip-10-0-130-234.ec2.internal   Ready    worker   173m   v1.14.6+97c81d00e
ip-10-0-135-69.ec2.internal    Ready    master   178m   v1.14.6+97c81d00e
ip-10-0-145-208.ec2.internal   Ready    master   178m   v1.14.6+97c81d00e
ip-10-0-155-197.ec2.internal   Ready    worker   173m   v1.14.6+97c81d00e
ip-10-0-169-8.ec2.internal     Ready    worker   173m   v1.14.6+97c81d00e
ip-10-0-174-89.ec2.internal    Ready    master   178m   v1.14.6+97c81d00e

$ oc debug node/ip-10-0-130-234.ec2.internal -- chroot /host journalctl | grep -i drain
Starting pod/ip-10-0-130-234ec2internal-debug ...
To use host binaries, run `chroot /host`
Feb 05 00:45:28 ip-10-0-130-234 root[13301]: machine-config-daemon[135215]: Update prepared; beginning drain
Feb 05 00:46:29 ip-10-0-130-234 root[16064]: machine-config-daemon[135215]: drain complete
Removing debug pod ...

$ oc debug node/ip-10-0-135-69.ec2.internal -- chroot /host journalctl | grep -i drain
Starting pod/ip-10-0-135-69ec2internal-debug ...
To use host binaries, run `chroot /host`
Feb 05 00:42:29 ip-10-0-135-69 root[6780]: machine-config-daemon[143824]: Update prepared; beginning drain
Feb 05 00:43:17 ip-10-0-135-69 root[11871]: machine-config-daemon[143824]: drain complete
Removing debug pod ...
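For completeness, the per-node drain check above can be run against every node in one pass. A minimal sketch, using only the commands already shown in this verification (it assumes a logged-in `oc` session with cluster-admin; node names and the exact machine-config-daemon log wording will differ per cluster):

# Sketch: repeat the MCD drain-log check across all nodes.
# `oc get nodes -o name` emits "node/<name>", which `oc debug` accepts directly.
for node in $(oc get nodes -o name); do
  echo "=== ${node} ==="
  oc debug "${node}" -- chroot /host journalctl | grep -i drain
done

Each node that was updated should show a matching "Update prepared; beginning drain" / "drain complete" pair, as in the two nodes checked above.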
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0399