Bug 2160942

Summary: Failed at DNS verification when applying a state with many veth to be activated
Product: Red Hat Enterprise Linux 9 Reporter: Mingyu Shi <mshi>
Component: nmstateAssignee: Gris Ge <fge>
Status: CLOSED ERRATA QA Contact: Mingyu Shi <mshi>
Severity: high Docs Contact:
Priority: medium    
Version: 9.2CC: ferferna, jiji, jishi, network-qe, sfaye, till
Target Milestone: rcKeywords: Regression, Triaged
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: nmstate-2.2.5-1.el9 Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2023-05-09 07:31:53 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Attachments:
Description Flags
reproducer.sh
none
nmstate+NMtrace.log none

Description Mingyu Shi 2023-01-14 15:36:55 UTC
Created attachment 1938074 [details]
reproducer.sh

Description of problem:
Please check the reproducer in the attachment.
Note: to reproduce, first remove all veth interfaces mentioned in the attachment, then run the reproducer to recreate them.

In the reproducer:
DNS + 1 bond + 34 veth (should be "disconnected" in `nmcli dev`): always fails on the first attempt; if you try again (without deleting the veth interfaces), it almost always passes

Other attempts:
DNS + 1 bond + 2 veth (ports): pass
1 bond + 34 veth: pass
changing "type: ethernet" to "type: dummy": pass (with dummy interfaces, it even passes with a more complicated desired state)



Version-Release number of selected component (if applicable):
nmstate-2.2.3-3.el9.x86_64
nispor-1.2.9-1.el9.x86_64
NetworkManager-1.41.7-2.el9.x86_64
openvswitch2.15-2.15.0-79.el9fdp.x86_64
Linux dell-per740-80.rhts.eng.pek2.redhat.com 5.14.0-231.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 9 15:03:12 EST 2023 x86_64 x86_64 x86_64 GNU/Linux
DISTRO=RHEL-9.2.0-20230111.33

How reproducible:
100% (on the first attempt)

Steps to Reproduce:
Run the reproducer in the attachment

Actual results:
[2023-01-14T15:01:11Z INFO  nmstate::query_apply::net_state] Rollbacked to checkpoint /org/freedesktop/NetworkManager/Checkpoint/71
NmstateError: VerificationError: Failed to apply DNS config: desire name servers '10.68.5.26 2400:3200:baba::1', got '10.68.5.26'

Expected results:
No failure; the DNS configuration is applied to g0bond0

Additional info:


----------------------
Engineering notes
----------------------
Acceptance criteria:
Given an nmstate state file with DNS and a bond interface configuration as described in the reproducer (see the attachment in the bz)
When the nmstate state file is applied and many veth interfaces are created
Then the DNS configuration is applied to the bond interface without any error.

Comment 1 Mingyu Shi 2023-01-14 15:40:33 UTC
Created attachment 1938075 [details]
nmstate+NMtrace.log

Comment 2 Fernando F. Mancera 2023-01-18 12:52:07 UTC
Hello,

I was able to reproduce it. We will add this to our backlog and work on it when we have capacity. Thanks for reporting!

Comment 3 Gris Ge 2023-01-19 07:33:49 UTC
Patch sent to upstream: https://github.com/nmstate/nmstate/pull/2201

Q: Why is the failure random?
A: The previous code used `HashMap::iter()` to search for a valid interface to store the DNS configuration. Since `HashMap` iteration order is unspecified, the bond is sometimes visited before the veth interfaces, in which case there is no problem; otherwise the problem is triggered. That also explains why so many (34) veth interfaces are needed to increase the chance of hitting it.
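A minimal Rust sketch of the ordering problem described above (interface names and the boolean "can hold DNS" flag are illustrative, not the actual nmstate data model): `HashMap::iter()` visits entries in an unspecified order, so the first matching interface varies between runs, while sorting the candidate names first makes the choice deterministic.

```rust
use std::collections::HashMap;

// Unstable: whichever suitable interface HashMap::iter() happens to visit
// first wins, so the result can change from run to run.
fn pick_dns_iface_unstable(ifaces: &HashMap<String, bool>) -> Option<String> {
    ifaces
        .iter()
        .find(|(_, suits_dns)| **suits_dns)
        .map(|(name, _)| name.clone())
}

// Deterministic variant resembling the fix: collect the candidates and
// sort them by name before picking one.
fn pick_dns_iface_stable(ifaces: &HashMap<String, bool>) -> Option<String> {
    let mut names: Vec<&String> = ifaces
        .iter()
        .filter(|(_, suits_dns)| **suits_dns)
        .map(|(name, _)| name)
        .collect();
    names.sort();
    names.first().map(|n| (*n).to_string())
}

fn main() {
    let mut ifaces = HashMap::new();
    ifaces.insert("veth33".to_string(), true); // IPv6 link-local only
    ifaces.insert("g0bond0".to_string(), true); // the bond with desired IPs
    // May print either interface depending on hash order:
    println!("unstable: {:?}", pick_dns_iface_unstable(&ifaces));
    // Always "g0bond0" (alphabetically first among candidates):
    assert_eq!(pick_dns_iface_stable(&ifaces), Some("g0bond0".to_string()));
}
```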

Q: Root cause?
A: The newly created veth2 (and later) interfaces hold an IPv6 link-local address, which makes them candidates for storing the IPv6 DNS server. But when the desired state has no IPv6 configuration defined, nmstate forgot to copy the IPv6 stack from the current state before applying the DNS settings.

Q: What's the fix?
A: Copy the IP stack from the current state when an interface has no IP configuration in the desired state but is marked to hold DNS information.
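A simplified sketch of that fix, with hypothetical structs (`Iface`, `Ipv6Config`, `holds_dns`) standing in for the real nmstate types: when the desired interface is marked to hold DNS but defines no IPv6 stack, copy it from the current state before applying DNS settings.

```rust
// Hypothetical, simplified types; not the actual nmstate data model.
#[derive(Clone, Debug, PartialEq)]
struct Ipv6Config {
    enabled: bool,
    addresses: Vec<String>,
}

#[derive(Clone, Debug)]
struct Iface {
    name: String,
    ipv6: Option<Ipv6Config>,
    holds_dns: bool,
}

// If the desired interface should hold DNS but has no IPv6 defined,
// inherit the IPv6 stack from the current state.
fn merge_ip_stack(desired: &mut Iface, current: &Iface) {
    if desired.holds_dns && desired.ipv6.is_none() {
        desired.ipv6 = current.ipv6.clone();
    }
}

fn main() {
    let current = Iface {
        name: "g0bond0".into(),
        ipv6: Some(Ipv6Config {
            enabled: true,
            addresses: vec!["2001:db8::2/64".into()], // placeholder address
        }),
        holds_dns: false,
    };
    let mut desired = Iface {
        name: "g0bond0".into(),
        ipv6: None, // desired state defines no IPv6
        holds_dns: true,
    };
    merge_ip_stack(&mut desired, &current);
    // The desired interface now carries the current IPv6 stack:
    assert_eq!(desired.ipv6, current.ipv6);
}
```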

Q: Why was veth2+ used instead of the bond interface, which also holds IPv6 info?
A: The posted patch changes this behavior: we first try a `preferred` DNS interface, then fall back to a `valid` one. `Preferred` here means a desired interface that holds the desired IP configuration and is suitable for DNS. An interface holding only an IPv6 link-local address is not preferred for IPv6 DNS.

Q: Any fix for the random problem?
A: The posted patch uses the insertion order of interfaces in the desired state when searching for a DNS interface. If none is found in the desired state, we fall back to the current interface names in sorted order. This eliminates the randomness, so nmstate is consistent in choosing the DNS interface.
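The search order described above can be sketched as follows (a simplified model, assuming a hypothetical `suits_dns` predicate; not the actual patch code): desired interfaces are tried in their insertion order, and only if none qualifies do we fall back to the current interfaces sorted by name.

```rust
// Try desired interfaces in insertion order first; otherwise fall back to
// current interfaces sorted by name, so the choice is deterministic.
fn pick_dns_iface<F: Fn(&str) -> bool>(
    desired_in_order: &[&str],
    current: &[&str],
    suits_dns: F,
) -> Option<String> {
    // 1. Desired state, in the order the interfaces were listed.
    for &name in desired_in_order {
        if suits_dns(name) {
            return Some(name.to_string());
        }
    }
    // 2. Fallback: current interfaces, sorted by name.
    let mut candidates: Vec<&str> = current
        .iter()
        .copied()
        .filter(|&n| suits_dns(n))
        .collect();
    candidates.sort();
    candidates.first().map(|n| n.to_string())
}

fn main() {
    let desired = ["g0bond0", "veth0"];
    let current = ["veth33", "veth1", "g0bond0"];
    // The bond appears first in the desired state, so it wins:
    assert_eq!(
        pick_dns_iface(&desired, &current, |_| true),
        Some("g0bond0".to_string())
    );
    // With an empty desired state, sorted current names decide:
    assert_eq!(
        pick_dns_iface(&[], &current, |_| true),
        Some("g0bond0".to_string())
    );
}
```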

Comment 6 Mingyu Shi 2023-01-28 08:20:55 UTC
Verified with:
nmstate-2.2.5-1.el9.x86_64
nispor-1.2.9-1.el9.x86_64
NetworkManager-1.41.90-1.el9.x86_64
DISTRO=RHEL-9.2.0-20230127.12

Comment 8 errata-xmlrpc 2023-05-09 07:31:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (nmstate bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:2190