Created attachment 1938074: reproducer.sh
Description of problem:
Please check the reproducer in the attachment.
Note: to reproduce, first remove all the veth interfaces mentioned in the attachment, then run the reproducer to create them.
In the reproducer:
DNS + 1 bond + 34 veth (should show as "disconnected" in `nmcli dev`): always fails on the first attempt; if run again (without deleting the veth interfaces), it almost always passes
Other attempts:
DNS + 1 bond + 2 veth(port): pass
1 bond + 34 veth: pass
change "type: ethernet" to "type: dummy": pass(with dummy, it even gets pass with more complicated desired state)
Version-Release number of selected component (if applicable):
nmstate-2.2.3-3.el9.x86_64
nispor-1.2.9-1.el9.x86_64
NetworkManager-1.41.7-2.el9.x86_64
openvswitch2.15-2.15.0-79.el9fdp.x86_64
Linux dell-per740-80.rhts.eng.pek2.redhat.com 5.14.0-231.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 9 15:03:12 EST 2023 x86_64 x86_64 x86_64 GNU/Linux
DISTRO=RHEL-9.2.0-20230111.33
How reproducible:
100% (on the first attempt)
Steps to Reproduce:
Run the reproducer in the attachment
Actual results:
[2023-01-14T15:01:11Z INFO nmstate::query_apply::net_state] Rollbacked to checkpoint /org/freedesktop/NetworkManager/Checkpoint/71
NmstateError: VerificationError: Failed to apply DNS config: desire name servers '10.68.5.26 2400:3200:baba::1', got '10.68.5.26'
Expected results:
No failure; the DNS servers are applied to g0bond0
Additional info:
----------------------
Engineering notes
----------------------
Acceptance criteria:
Given an nmstate state file with a DNS and bond interface configuration as described in the reproducer (see the attachment in the bz)
When the nmstate state file is applied and many veth interfaces are created
Then the DNS servers are applied to the bond interface without any error.
Patch sent to upstream: https://github.com/nmstate/nmstate/pull/2201
Q: Why is the failure random?
A: The previous code used `HashMap::iter()` to search for a valid interface to store the DNS configuration. Because `HashMap` iteration order is unspecified, there is no problem when the bond happens to be visited before the veth interfaces; otherwise the problem is triggered. That also explains why so many (34) veth interfaces are needed to increase the chance of hitting the problem.
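A minimal sketch of the ordering problem, not the actual nmstate code; the `Iface` struct and the `pick_dns_iface` helper are hypothetical:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for an interface entry; only the field needed
// to illustrate the ordering problem.
struct Iface {
    can_hold_dns: bool,
}

// Picks the first interface that can hold the DNS configuration.
// Because HashMap iteration order is unspecified (and randomized per run),
// the result depends on hashing internals: sometimes the bond comes first,
// sometimes a veth.
fn pick_dns_iface(ifaces: &HashMap<String, Iface>) -> Option<&String> {
    ifaces
        .iter()
        .find(|(_, iface)| iface.can_hold_dns)
        .map(|(name, _)| name)
}

fn main() {
    let mut ifaces = HashMap::new();
    ifaces.insert("g0bond0".to_string(), Iface { can_hold_dns: true });
    for i in 0..34 {
        // The veths hold an IPv6 link-local address, so they also qualify.
        ifaces.insert(format!("veth{}", i), Iface { can_hold_dns: true });
    }
    // With 35 qualifying entries, the bond is rarely visited first,
    // which matches the "always fails on the first attempt" symptom.
    println!("chosen: {:?}", pick_dns_iface(&ifaces));
}
```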
Q: Root cause?
A: The newly created veth interfaces (veth2 and later) hold an IPv6 link-local address, which makes them suitable for storing the IPv6 DNS server. But when the desired state has no IPv6 defined, nmstate forgot to copy the IPv6 stack from the current state before applying the DNS settings.
Q: What's the fix?
A: Copy the IP stack from the current state when it is not in the desired state but the interface is marked to hold the DNS info.
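A rough sketch of that idea, using simplified stand-in types (`Ipv6Config`, `Iface`) rather than the real nmstate structures:

```rust
// Simplified stand-ins for the real nmstate types; field names are assumptions.
#[derive(Clone, Debug, Default)]
struct Ipv6Config {
    enabled: bool,
    addresses: Vec<String>,
}

#[derive(Clone, Debug, Default)]
struct Iface {
    name: String,
    ipv6: Option<Ipv6Config>,
    holds_dns: bool,
}

// If the desired interface was picked to hold the IPv6 DNS servers but the
// desired state did not define an IPv6 section, inherit the IPv6 stack from
// the current state so the verification step sees a complete configuration.
fn merge_ipv6_for_dns(desired: &mut Iface, current: &Iface) {
    if desired.holds_dns && desired.ipv6.is_none() {
        desired.ipv6 = current.ipv6.clone();
    }
}

fn main() {
    let current = Iface {
        name: "g0bond0".into(),
        ipv6: Some(Ipv6Config {
            enabled: true,
            addresses: vec!["2001:db8::10/64".into()],
        }),
        holds_dns: false,
    };
    let mut desired = Iface { name: "g0bond0".into(), ipv6: None, holds_dns: true };
    merge_ipv6_for_dns(&mut desired, &current);
    println!("{:?}", desired.ipv6);
}
```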
Q: Why was veth2+ used instead of the bond interface, which also holds IPv6 info?
A: The posted patch changes this behavior: we first try a `preferred` DNS interface, then fall back to a `valid` DNS interface. `Preferred` here means a desired interface that holds the desired IP configuration and is suitable for DNS. An interface holding only an IPv6 link-local address is not preferred for IPv6 DNS.
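The preferred-versus-valid split could look roughly like the following; the `Iface` summary and helper names are assumptions for illustration, not the patch's actual API:

```rust
// Hypothetical interface summary used only for this illustration.
struct Iface {
    name: String,
    in_desired_state: bool,
    ipv6_addresses: Vec<String>,
}

// An interface whose only IPv6 addresses are link-local (fe80::/10) is still
// "valid" for IPv6 DNS, but it is not "preferred".
fn is_link_local_only(iface: &Iface) -> bool {
    !iface.ipv6_addresses.is_empty()
        && iface.ipv6_addresses.iter().all(|a| a.starts_with("fe80:"))
}

// Preferred: in the desired state and carrying a usable (non-link-local) IPv6 address.
fn is_preferred_for_ipv6_dns(iface: &Iface) -> bool {
    iface.in_desired_state
        && !iface.ipv6_addresses.is_empty()
        && !is_link_local_only(iface)
}

// Valid: any interface with some IPv6 address, link-local included.
fn is_valid_for_ipv6_dns(iface: &Iface) -> bool {
    !iface.ipv6_addresses.is_empty()
}

// Try preferred interfaces first, then fall back to merely valid ones.
fn pick_ipv6_dns_iface(ifaces: &[Iface]) -> Option<&Iface> {
    ifaces
        .iter()
        .find(|i| is_preferred_for_ipv6_dns(i))
        .or_else(|| ifaces.iter().find(|i| is_valid_for_ipv6_dns(i)))
}

fn main() {
    let ifaces = vec![
        Iface {
            name: "veth2".into(),
            in_desired_state: false,
            ipv6_addresses: vec!["fe80::abcd/64".into()],
        },
        Iface {
            name: "g0bond0".into(),
            in_desired_state: true,
            ipv6_addresses: vec!["2001:db8::10/64".into()],
        },
    ];
    // The bond is preferred, so it wins even though the veth is listed first.
    println!("{:?}", pick_ipv6_dns_iface(&ifaces).map(|i| &i.name));
}
```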
Q: Any fix for the random problem?
A: The posted patch uses the insertion order of the interfaces in the desired state when searching for the DNS interface. If no suitable interface is found in the desired state, the current interfaces are searched in sorted name order. This eliminates the randomness, so nmstate is consistent in choosing the DNS interface.
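A small sketch of that deterministic ordering, assuming the desired and current interface names are already available as plain lists (the real code works on richer state objects):

```rust
// Build the candidate order for the DNS interface search: first the desired
// interfaces in the order they appear in the desired state, then the
// remaining current interfaces sorted by name.
fn dns_search_order(desired_in_order: &[String], current: &[String]) -> Vec<String> {
    let mut order: Vec<String> = desired_in_order.to_vec();
    let mut rest: Vec<String> = current
        .iter()
        .filter(|&name| !desired_in_order.contains(name))
        .cloned()
        .collect();
    rest.sort();
    order.extend(rest);
    order
}

fn main() {
    let desired = vec!["g0bond0".to_string()];
    let current = vec![
        "veth9".to_string(),
        "veth10".to_string(),
        "g0bond0".to_string(),
    ];
    // Always yields ["g0bond0", "veth10", "veth9"], so the choice is stable
    // across runs instead of depending on HashMap iteration order.
    println!("{:?}", dns_search_order(&desired, &current));
}
```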
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory (nmstate bug fix and enhancement update), and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2023:2190