Bug 1800792 - NodeNetworkConfigurationPolicy not applied to workers that were down during initial application of the policy
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Networking
Version: 2.3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 2.3.0
Assignee: Quique Llorente
QA Contact: Meni Yakove
URL:
Whiteboard:
Depends On:
Blocks: 1771572
 
Reported: 2020-02-07 21:24 UTC by William Caban
Modified: 2023-09-14 05:52 UTC
CC List: 7 users

Fixed In Version: kubernetes-nmstate-handler-container-v2.3.0-21
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-04 19:10:37 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2020:2011 0 None None None 2020-05-04 19:10:49 UTC

Description William Caban 2020-02-07 21:24:57 UTC
Description of problem:

When applying a `NodeNetworkConfigurationPolicy` to workers (e.g. creating a VLAN interface), if a worker is down while the policy is applied, it never receives the new NetworkManager configuration after it comes back up, so the VLAN or bridge is never created.

If anything in the policy is updated, for example the description, and the policy is re-applied, it is then applied to all online workers, including the ones that never got the configuration before.


How reproducible:

Always


Steps to Reproduce:
1. Install the CNV Operator
2. Create a `NodeNetworkConfigurationPolicy` that creates a VLAN
3. Shut down a worker
4. Apply the policy and confirm the new interface was created on the online workers
5. Bring the worker that was shut down back up
6. The new VLAN interface is never created on it
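For reference, a minimal policy of the kind used in step 2 might look like the following sketch. The policy name, base interface, and VLAN ID are illustrative, and the `apiVersion` is assumed to be the v1alpha1 API of that era of kubernetes-nmstate:

```yaml
apiVersion: nmstate.io/v1alpha1   # assumed API version for CNV 2.x-era kubernetes-nmstate
kind: NodeNetworkConfigurationPolicy
metadata:
  name: vlan100-policy            # illustrative name
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
      - name: eth1.100            # illustrative VLAN interface
        type: vlan
        state: up
        vlan:
          base-iface: eth1        # illustrative base interface
          id: 100
```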

Expected results:

There should be a remediation cycle in the operator that validates that the configuration is as expected.
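The remediation cycle requested above can be sketched as a periodic reconcile pass that re-applies the desired network state to any node whose observed state has drifted, including a node that was down when the policy was first applied. This is a hypothetical illustration, not the kubernetes-nmstate implementation; `Node`, `apply_policy`, and `reconcile` are invented names:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Toy model of a worker: online flag and the interfaces it has."""
    name: str
    online: bool = True
    interfaces: set = field(default_factory=set)

def apply_policy(node: Node, desired: set) -> bool:
    """Apply any missing desired interfaces; return True if anything changed."""
    if not node.online:
        return False              # unreachable now; picked up on a later pass
    missing = desired - node.interfaces
    node.interfaces |= missing
    return bool(missing)

def reconcile(nodes: list, desired: set) -> list:
    """One remediation pass: return the names of nodes that were updated."""
    return [n.name for n in nodes if apply_policy(n, desired)]
```

With such a loop, a worker that was offline during the first pass simply gets the configuration on the next pass after it comes back, which is exactly the behavior this report asks for.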

Comment 1 Petr Horáček 2020-02-08 23:31:36 UTC
This should be tackled in the current master on U/S that will be shipped in CNV 2.3. We will verify whether it is fixed of course.

Comment 2 Dan Kenigsberg 2020-02-09 05:18:10 UTC
(In reply to Petr Horáček from comment #1)
> This should be tackled in the current master on U/S that will be shipped in
> CNV 2.3. We will verify whether it is fixed of course.

Based on the above, shouldn't this bug move to MODIFIED or even ON_QA?

Comment 3 Quique Llorente 2020-02-10 13:23:30 UTC
Hi,

   So I have tried this manually with kubernetes-nmstate u/s master and OCP 4.4 and it's working fine, so we may need to upgrade the CNV 2.2 version.

Comment 4 Quique Llorente 2020-02-10 14:29:43 UTC
Also, testing with the cnv-2.2 kubernetes-nmstate version (0.13.0), it works fine.

@William Caban can you also attach the output of the following commands?

oc get nncp -o yaml
oc get nnce -o yaml
oc logs -n openshift-cnv -l app=kubernetes-nmstate

Comment 5 William Caban 2020-02-10 21:04:19 UTC
This report is from OCP 4.3 w/CNV 2.2. We will reproduce and report.

Comment 6 Quique Llorente 2020-02-11 12:51:40 UTC
I think I have been able to reproduce it with the kubevirtci provider ocp-4.3 and upstream cluster-network-addons-operator 0.23.0.

After shutting down the worker, applying a VLAN policy, and starting the worker up again, the worker does not show any sign of applying the policy.

Also, I see an error on the master when it tries to re-apply the VLAN (nmstate fails; I suspect it may not be idempotent there).

Let's wait for your report so we check if we see the same issue.


Also, after upgrading CNAO to 0.26.0, everything works fine.

Comment 7 Quique Llorente 2020-02-11 14:03:28 UTC
Also tested with CNAO 0.25.0 (that's the CNV 2.3 version); everything works fine. Looks like this bug is fixed in CNV 2.3.

Comment 9 Meni Yakove 2020-02-27 15:26:01 UTC
operatorVersion: v0.26.1

Comment 12 errata-xmlrpc 2020-05-04 19:10:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:2011

Comment 13 Red Hat Bugzilla 2023-09-14 05:52:08 UTC
The needinfo request(s) on this closed bug have been removed, as they have been unresolved for 1000 days.

