Description of problem:
NNCP status is not calculated and stays unknown when one node of the cluster is down or restarting. This is a realistic scenario: not all nodes of a cluster are up at all times, and if an NNCP is applied during such a window, its status is never reported. There are two cases:

1. The node never comes back up.
2. The node comes back up.

Case 1: if the node never comes up, oc get nncp shows no status at all:

$ oc get nncp
NAME                  STATUS
nncp-maxunavailable

Case 2: if the node comes back up, its enactment goes straight to "Aborted" and the NNCP status moves to "Degraded".

Version-Release number of selected component (if applicable):
4.9

How reproducible:
always

Steps to Reproduce:
1. Bring a node down (a large cluster rarely has every node up, and a node may also go down due to some issue, so this is a realistic case; a sketch of these commands follows at the end of this comment).
2. Configure a policy on the cluster.
3. All available nodes reach Aborted, Failing, or Available enactment status, but the NNCP status itself stays empty:

$ oc get nncp
NAME                  STATUS
nncp-maxunavailable

4. Bring the node back up. Its enactment goes straight to "Aborted" and the NNCP status moves to "Degraded".

Actual results:
NNCP status is not calculated.

Expected results:
The status should be calculated based on the currently available nodes, not on the total number of nodes.

Additional info:
Concerns: if the policy is going to be marked "Degraded" without performing any operation on the down node, there is no point in waiting to display a status. A user may have no intention of bringing that node back up, and in that case the NNCP never reaches Available status even though the other nodes have the policy configured correctly.
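For reference, a minimal sketch of the reproduction, assuming an RHCOS worker named worker-1 reachable over SSH and a policy that configures a test bridge on ens4 (the node name, bridge name, and interface are illustrative assumptions, not taken from this report):

# Simulate the down node: stopping kubelet moves the node to NotReady.
# (worker-1 is a hypothetical node name; any worker will do.)
$ ssh core@worker-1 sudo systemctl stop kubelet
$ oc get nodes

# Apply a policy while the node is down. br-test and ens4 are
# illustrative; any bridge/NIC pair valid for the cluster works.
# With no nodeSelector, the policy targets all nodes, including the
# NotReady one.
$ cat <<EOF | oc apply -f -
apiVersion: nmstate.io/v1beta1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: nncp-maxunavailable
spec:
  desiredState:
    interfaces:
    - name: br-test
      type: linux-bridge
      state: up
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: ens4
EOF

# Per-node enactments reach a terminal state on the reachable nodes,
# but the policy-level STATUS column stays empty:
$ oc get nnce
$ oc get nncp
NAME                  STATUS
nncp-maxunavailable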
Blockers only: Moving to 4.10.1
Verified.

OCP Version 4.10.6
kubernetes-nmstate-handler v4.10.1-2

Applied the following NNCP:

apiVersion: nmstate.io/v1beta1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br-worker2
spec:
  nodeSelector:
    bz: 'yes'
  desiredState:
    interfaces:
    - name: br1
      type: linux-bridge
      state: up
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: ens4

on two nodes labeled bz: 'yes':

n-adiz-410-8d2hr-worker-0-8th5h   NotReady   worker   7d2h   v1.23.5+b0357ed
n-adiz-410-8d2hr-worker-0-ff9wp   Ready      worker   7d2h   v1.23.5+b0357ed

[cnv-qe-jenkins@n-adiz-410-8d2hr-executor ~]$ oc get nnce
NAME                                         STATUS
n-adiz-410-8d2hr-worker-0-ff9wp.br-worker2   Available

[cnv-qe-jenkins@n-adiz-410-8d2hr-executor ~]$ oc get nncp
NAME         STATUS
br-worker2   Available
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Virtualization 4.10.1 Images security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:4668