+++ This bug was initially created as a clone of Bug #1809283 +++

Description of problem:
When Multus is upgraded, it invalidates the primary CNI configuration on each node as that node is upgraded. This causes unnecessary delays, and can result

How reproducible:
During upgrades.

Additional info:
One of the fixes for https://bugzilla.redhat.com/show_bug.cgi?id=1793635

--- Additional comment from Douglas Smith on 2020-03-02 19:09:00 UTC ---

PR @ https://github.com/openshift/multus-cni/pull/52
Tested and verified in 4.4.0-0.nightly-2020-03-05-100046:

[root@dhcp-41-193 Network]# oc exec multus-5k5jr -- cat /entrypoint.sh | grep invalid
[root@dhcp-41-193 Network]# oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.4.0-0.nightly-2020-03-05-100046   True        False         5h23m   Cluster version is 4.4.0-0.nightly-2020-03-05-100046
[root@dhcp-41-193 Network]#

Without the fixed PR, cat /entrypoint.sh shows:

[root@dhcp-41-193 FILE]# oc exec multus-fp4rb -- cat /entrypoint.sh | grep invalid
{Multus configuration intentionally invalidated to prevent pods from being scheduled.}
log "Multus configuration intentionally invalidated to prevent pods from being scheduled."
# But first, check if it has the invalidated configuration in it (otherwise we keep doing this over and over.)
if ! grep -q "invalidated" $CNI_CONF_DIR/00-multus.conf; then
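For context, the grep hits above come from the entrypoint's invalidation guard: the config is only rewritten if the "invalidated" marker is not already present, so the node is not re-invalidated on every pass. A minimal standalone sketch of that guard follows; the paths, the demo config contents, and the invalidate_conf function name are illustrative stand-ins, not the actual entrypoint.sh code.

```shell
#!/bin/sh
# Sketch of the invalidation guard quoted in the entrypoint output above.
# CNI_CONF_DIR defaults to a scratch directory for this demo; the real
# entrypoint points it at the node's CNI configuration directory.
CNI_CONF_DIR="${CNI_CONF_DIR:-/tmp/cni-demo}"
mkdir -p "$CNI_CONF_DIR"
CONF="$CNI_CONF_DIR/00-multus.conf"

# invalidate_conf is a hypothetical stand-in for the entrypoint logic.
invalidate_conf() {
  # Check first whether the config already carries the marker
  # (otherwise we would keep doing this over and over).
  if ! grep -q "invalidated" "$CONF"; then
    echo "invalidated" > "$CONF"
    echo "Multus configuration intentionally invalidated to prevent pods from being scheduled."
  fi
}

# Demo: seed a placeholder config, then call the guard twice.
echo '{"type":"multus"}' > "$CONF"
invalidate_conf   # first call rewrites the config and logs
invalidate_conf   # second call is a no-op: marker already present
```

Running the sketch prints the log line exactly once, which is the behavior the fixed PR relies on to avoid repeatedly clobbering the primary CNI configuration.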
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581