Bug 1982003

Summary: [4.7.z] On an Azure IPI installation MCO fails to create new nodes

Product: OpenShift Container Platform
Component: RHCOS
Version: 4.7
Target Release: 4.7.z
Hardware: x86_64
OS: Linux
Severity: medium
Priority: unspecified
Status: CLOSED ERRATA
Reporter: Benjamin Gilbert <bgilbert>
Assignee: Benjamin Gilbert <bgilbert>
QA Contact: Michael Nguyen <mnguyen>
CC: bgilbert, bverschu, dornelas, jligon, miabbott, mnguyen, mrussell, nstielau, smilner, vmedina
Clone Of: 1982002
Last Closed: 2021-10-12 19:51:42 UTC
Bug Depends On: 1964753, 1980679, 1982001, 1982002
Bug Blocks: 1982004

Comment 2 Benjamin Gilbert 2021-07-15 20:23:12 UTC
Needs a bootimage bump; moving back to POST.

Comment 3 Benjamin Gilbert 2021-07-16 17:03:39 UTC
Landed in Git; waiting for bootimage bump.

Comment 4 RHCOS Bug Bot 2021-10-01 16:49:36 UTC
The fix for this bug has landed in a bootimage bump, as tracked in bug 1964753 (now in status MODIFIED). Moving this bug to MODIFIED.

Comment 7 Micah Abbott 2021-10-06 15:02:17 UTC
Verified with 4.7.0-0.nightly-2021-10-05-183054; the afterburn check-in service now runs after Ignition fetch.

[miabbott@toolbox (container) ~/openshift-cluster-installs ]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2021-10-05-183054   True        False         23m     Cluster version is 4.7.0-0.nightly-2021-10-05-183054
[miabbott@toolbox (container) ~/openshift-cluster-installs ]$ oc get nodes
NAME                                              STATUS   ROLES    AGE   VERSION
ci-ln-xh5zgs2-002ac-tjkn8-master-0                Ready    master   43m   v1.20.0+bbbc079
ci-ln-xh5zgs2-002ac-tjkn8-master-1                Ready    master   42m   v1.20.0+bbbc079
ci-ln-xh5zgs2-002ac-tjkn8-master-2                Ready    master   43m   v1.20.0+bbbc079
ci-ln-xh5zgs2-002ac-tjkn8-worker-eastus21-rw8pz   Ready    worker   34m   v1.20.0+bbbc079
ci-ln-xh5zgs2-002ac-tjkn8-worker-eastus22-q9dmq   Ready    worker   33m   v1.20.0+bbbc079
ci-ln-xh5zgs2-002ac-tjkn8-worker-eastus23-nnjrl   Ready    worker   34m   v1.20.0+bbbc079
[miabbott@toolbox (container) ~/openshift-cluster-installs ]$ oc debug node/ci-ln-xh5zgs2-002ac-tjkn8-worker-eastus22-q9dmq
Starting pod/ci-ln-xh5zgs2-002ac-tjkn8-worker-eastus22-q9dmq-debug ...
To use host binaries, run `chroot /host`
Pod IP:
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-4.4# grep ^After /usr/lib/dracut/modules.d/30rhcos-afterburn-checkin/rhcos-afterburn-checkin.service
sh-4.4# journalctl -u ignition-fetch | grep -i start; journalctl | grep coreos-kargs-reboot; journalctl -u rhcos-afterburn-checkin | grep -i start
Oct 06 14:21:47 localhost systemd[1]: Starting Ignition (fetch)...
Oct 06 14:21:47 localhost ignition[761]: op(1): [started]  mounting "/dev/disk/by-id/ata-Virtual_CD" at "/tmp/ignition-azure074386380"
Oct 06 14:21:47 localhost ignition[761]: op(2): [started]  unmounting "/dev/disk/by-id/ata-Virtual_CD" at "/tmp/ignition-azure074386380"
Oct 06 14:22:09 localhost systemd[1]: Started Ignition (fetch).
sh-4.4# rpm-ostree status
State: idle
* pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ddb23fc7b54080594ec39d624e2ede82f76db63ce0a729e6ecf43543088d3db
              CustomOrigin: Managed by machine-config-operator
                   Version: 47.84.202110041927-0 (2021-10-04T19:30:35Z)

                   Version: 47.84.202109241831-0 (2021-09-24T18:34:26Z)
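The ordering check above boils down to comparing journal timestamps for the two units. A minimal sketch of that comparison follows; the Ignition lines are copied from the transcript above, while the afterburn check-in line and its unit description are hypothetical stand-ins (on a real node the input would come from `journalctl -u ignition-fetch -u rhcos-afterburn-checkin`).

```shell
# Sample journal lines: the first two are from the transcript above; the
# third (afterburn check-in start) is a hypothetical example line.
journal='Oct 06 14:21:47 localhost systemd[1]: Starting Ignition (fetch)...
Oct 06 14:22:09 localhost systemd[1]: Started Ignition (fetch).
Oct 06 14:22:30 localhost systemd[1]: Starting RHCOS Afterburn Checkin...'

# Field 3 of each journal line is the HH:MM:SS timestamp.
fetch_done=$(printf '%s\n' "$journal" | awk '/Started Ignition \(fetch\)/ {print $3}')
checkin_start=$(printf '%s\n' "$journal" | awk '/Starting RHCOS Afterburn/ {print $3}')

# Lexical comparison is valid here because both timestamps share the same
# fixed-width HH:MM:SS format within a single boot's log.
if [ "$(printf '%s\n%s\n' "$fetch_done" "$checkin_start" | sort | head -n1)" = "$fetch_done" ]; then
  checkin_ok=yes
else
  checkin_ok=no
fi
echo "fetch done: $fetch_done, checkin start: $checkin_start, ordered: $checkin_ok"
```

With the fix in place, the check-in start time should never precede the Ignition fetch completion time, so `checkin_ok` should report `yes`.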

Comment 9 errata-xmlrpc 2021-10-12 19:51:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.7.33 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.