Description of problem:
The "Bond Active Slave" field in the RHV-M GUI shows an incorrect value for a bond in Active-Backup mode.

Version-Release number of selected component (if applicable):
RHV 4.2.8

How reproducible:
1. On a RHV host "Host1", the NICs "eno1" and "eno49" are bonded together as "bond0" in Active-Backup mode.
2. Unplug the network cable from one of the NIC ports to test the redundancy.
3. Before unplugging, both "eno1" and "eno49" are up, and the Bond Active Slave is shown as "eno49" on both RHV-M and the hypervisor command line.
4. After unplugging "eno49", the "eno1" status is up and the "eno49" status is down on both the RHV-M GUI and the host CLI. The Bond Active Slave is (correctly) changed to "eno1" on the host CLI, whereas RHV-M still shows "eno49" as the Bond Active Slave.
5. The results are the same even after restarting the 'ovirt-engine' service.

Workaround:
- In RHV, the "Refresh Capabilities" option on the host corrects the status of the active slave.
>>> Select host > Right click > Select Management > Refresh Capabilities.
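For reference, a minimal sketch of the host-side check performed in steps 3-4, reading the bond state that the kernel exposes under /sys/class/net. The bond and slave names ("bond0", "eno1", "eno49") are the ones from this report and would differ on other hosts.

```python
#!/usr/bin/env python3
# Minimal sketch: read the kernel's view of an Active-Backup bond on the
# hypervisor, i.e. what the host CLI reports in steps 3-4 above.
# Assumes the bond name from this report ("bond0").
from pathlib import Path

BOND = "bond0"
bonding_dir = Path(f"/sys/class/net/{BOND}/bonding")

# Current active slave as seen by the kernel (what RHV-M should also show).
active_slave = (bonding_dir / "active_slave").read_text().strip()
print(f"{BOND} active slave (kernel): {active_slave}")

# Link state of each slave, e.g. "eno1" and "eno49" in this report.
for slave in (bonding_dir / "slaves").read_text().split():
    operstate = Path(f"/sys/class/net/{slave}/operstate").read_text().strip()
    print(f"  slave {slave}: {operstate}")
```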
possible duplicate of bug 999947
Document the workaround for 4.3.
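For the documentation, a hedged sketch of triggering the same "Refresh Capabilities" workaround programmatically, assuming the ovirt-engine-sdk-python 4 API (Connection, hosts_service, and the host refresh action); the engine URL, credentials, and host search string below are placeholders, not values from this report.

```python
# Sketch only: trigger "Refresh Capabilities" for a host via the oVirt/RHV
# Python SDK v4. The URL, credentials, and host name are placeholders.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder engine URL
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
try:
    hosts_service = connection.system_service().hosts_service()
    host = hosts_service.list(search='name=Host1')[0]
    # Equivalent to the GUI action: Management > Refresh Capabilities.
    hosts_service.host_service(host.id).refresh()
finally:
    connection.close()
```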
Hi Chetan, thanks for reporting. Can you please attach the KCS article for this bug?
Hello Marina, the KCS article has been created and attached to this bug. -- ChetanN
Bug 1240719, which covered a similar issue, was closed WONTFIX.
What happens if the bond moves to a bad state after being reported as good, e.g. because the switch configuration is messed up?
Verified on:
- rhvm-4.4.1-0.1.el8ev.noarch
- nmstate-0.2.6-13.el8_2.noarch
- vdsm-4.40.17-1.el8ev.x86_64
- NetworkManager-1.22.8-4.el8.x86_64
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: RHV Manager (ovirt-engine) 4.4 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:3247