This bz is essentially the installer spin-off of https://bugzilla.redhat.com/show_bug.cgi?id=1908389. https://github.com/kubernetes/kubernetes/pull/95542 adds new requirements for Azure nic ID fields. Specifically, the `nicIDRE` regex (https://github.com/kubernetes/kubernetes/pull/95542/files#diff-0414c3aba906b2c0cdb2f09da32bd45c6bf1df71cbb2fc55950743c99a4a5fe4R79) expects the nic ID to follow something along the lines of `<node-name>-nic-<nic-num?>`. Currently, the installer's nic ID string follows the `<node-name>-nic` format, with no trailing number. https://github.com/kubernetes/kubernetes/issues/97352 is a recently created upstream issue that requests that the Azure provider not hard-fail on `nicIDRE` regex mismatches. Regardless, the installer should consider pivoting from the current nic naming scheme to the new upstream convention, if possible. Naturally, variance in this field across cluster versions will be a concern.
This issue is blocking the WMCO bump to 1.20 PR [0] from merging, which in turn blocks all of the team's PRs from merging into master. It would help if the priority and severity on this were raised to urgent. [0] https://github.com/openshift/windows-machine-config-operator/pull/230
(In reply to Aravindh Puthiyaparambil from comment #3) > This issue is blocking the WMCO bump to 1.20 PR [0] from merging, which in > turn blocks all of the team's PRs from merging into master. It would help if > the priority and severity on this were raised to urgent. > > [0] https://github.com/openshift/windows-machine-config-operator/pull/230 The blocking bug is https://bugzilla.redhat.com/show_bug.cgi?id=1908389, not this one.
An installer with fixed naming, which is what bug 1908389 is about, will unblock jobs that cover fresh installs. I think this bug is about how we handle existing clusters that were created with our old naming, since those will currently break if updated to a version with the new, restrictive cloud-provider code.
I flipped my bug sense in comment 6. This installer bug is the fresh-install bug. Bug 1908389 is about the cloud provider, and is about either getting some in-cluster change to support our existing infrastructure or finding some infrastructure migration that works with the upstream cloud provider.
*** This bug has been marked as a duplicate of bug 1908389 ***