Bug 1661876 - [downstream clone - 4.2.8] No exclamation icon when bond with an LACP is misconfigured
Summary: [downstream clone - 4.2.8] No exclamation icon when bond with an LACP is misconfigured
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.0.7
Hardware: Unspecified
OS: Linux
Priority: high
Severity: high
Target Milestone: ovirt-4.2.8
Assignee: Marcin Mirecki
QA Contact: Roni
URL:
Whiteboard:
Depends On: 1643512
Blocks:
 
Reported: 2018-12-24 07:38 UTC by RHV bug bot
Modified: 2022-03-13 16:36 UTC
CC: 7 users

Fixed In Version: ovirt-engine-4.2.8.2
Doc Type: No Doc Update
Doc Text:
Clone Of: 1643512
Environment:
Last Closed: 2019-01-22 12:44:51 UTC
oVirt Team: Network
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 3723211 0 None None None 2018-12-24 07:39:01 UTC
Red Hat Product Errata RHBA-2019:0121 0 None None None 2019-01-22 12:44:57 UTC
oVirt gerrit 95832 0 master MERGED Bond panel should display exclamation mark when aggregator id's are invalid. 2018-12-24 07:39:01 UTC
oVirt gerrit 96118 0 ovirt-engine-4.2 MERGED Bond panel should display exclamation mark when aggregator id's are invalid. 2019-01-07 12:30:01 UTC

Description RHV bug bot 2018-12-24 07:38:27 UTC
+++ This bug is a downstream clone. The original bug is: +++
+++   bug 1643512 +++
======================================================================

Description of problem:
According to commit 485ae1cb9a75fdf13469389463058ea25d29e244 ("webadmin: display invalid partner mac alert also when partner mac is 00:00:00:00:00:00"), webadmin is expected to display an invalid partner MAC alert in this situation.

The bond was intentionally misconfigured for the test: we had bond0 with 2 NICs. Running:

cat ./bonding/bond0  | grep Aggregator

Active Aggregator Info:
	Aggregator ID: 4
Aggregator ID: 4
Aggregator ID: 3

One of the NICs had wrong settings, e.g.:
details partner lacp pdu:
    system priority: 65535
    system mac address: 00:00:00:00:00:00

The invalid partner MAC alert on mode 4 bonds (an exclamation mark next to the bond name) should have been displayed, but it wasn't there.
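
For illustration only (this is neither vdsm nor engine code), the two conditions above can be checked directly against the kernel's bonding status file. The path and parsing below are a minimal sketch assuming the standard /proc/net/bonding layout:

INVALID_MAC = '00:00:00:00:00:00'

def bond_is_misconfigured(proc_path='/proc/net/bonding/bond0'):
    # Collect every "Aggregator ID" (active aggregator plus one per slave)
    # and each slave's partner system MAC from "details partner lacp pdu".
    agg_ids = []
    partner_macs = []
    in_partner_pdu = False
    with open(proc_path) as f:
        for raw in f:
            line = raw.strip()
            if line.startswith('details partner lacp pdu'):
                in_partner_pdu = True
            elif line.startswith('details actor lacp pdu'):
                in_partner_pdu = False
            elif line.startswith('Aggregator ID:'):
                agg_ids.append(line.split(':', 1)[1].strip())
            elif in_partner_pdu and line.startswith('system mac address:'):
                partner_macs.append(line.split(':', 1)[1].strip())
                in_partner_pdu = False
    # Misconfigured if the slaves landed in different aggregators, or if a
    # partner reported the all-zero MAC (no LACP partner on that port).
    return len(set(agg_ids)) > 1 or INVALID_MAC in partner_macs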

Version-Release number of selected component (if applicable):
4.1.z

How reproducible:
always

Steps to Reproduce:

2 ports on switch: Ethernet 1/1 and Ethernet 1/2
The interface Ethernet 1/1 has the line "channel-group 5 mode active", which means it was configured to be part of port-channel5, independently of the interface Ethernet 1/2.
Ethernet 1/2 was intentionally not added to port-channel5.

Actual results:

No alert appears, even though LACP is misconfigured.

Expected results:

The red exclamation mark appears next to the bond name.

Additional info:
Change: https://gerrit.ovirt.org/#/c/59062/
It was part of [RFE] "Engine should warn admin about bad 802.3ad status" 
https://bugzilla.redhat.com/show_bug.cgi?id=1281666

(Originally by Olimp Bockowski)

Comment 3 RHV bug bot 2018-12-24 07:38:32 UTC
Please include vdsm and Engine logs to better understand where the bug is. Which switch do you have to reproduce it "always"? Burman tells me that it does not reproduce on his rhv-4.1.11

(Originally by danken)

Comment 4 RHV bug bot 2018-12-24 07:38:33 UTC
I am really sorry, the version is:
ovirt-engine-4.0.7.4-0.1.el7ev.noarch

As for the switch, I will have to check with the customer. I will attach all information as private, since it contains details of the customer's environment.

(Originally by Olimp Bockowski)

Comment 8 RHV bug bot 2018-12-24 07:38:39 UTC
I don't see the output of getCaps in the logs you attached, nor the version of vdsm.

(Originally by danken)

Comment 9 RHV bug bot 2018-12-24 07:38:41 UTC
@Dan - I am sorry I didn't open it against the latest version, but I can't try to reproduce it myself due to lack of hardware resources, so I asked the customer to run tests and provide logs.
The network device is a Cisco Nexus 3500.
The vdsm version is not in any of the vdsm log files because of logrotate, but the package is vdsm-4.19.24-1.el7ev.x86_64.
As you can see, there is only an INFO line about getCapabilities succeeding, like:

 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getCapabilities succeeded in 0.15 seconds 

so I need DEBUG logging or a hook in /usr/libexec/vdsm/hooks/after_get_caps/.
I am going to ask the customer to run the hook and provide what is needed.
I am leaving the NEEDINFO on me.
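
A minimal dump hook along those lines might look like the sketch below. It assumes vdsm's hooking module provides read_json()/write_json() for JSON-based hooks and that bonds are reported under a "bondings" key; both are assumptions to verify against the installed vdsm version.

#!/usr/bin/python
# Sketch of a dump hook for /usr/libexec/vdsm/hooks/after_get_caps/
# (make the file executable). Assumes hooking.read_json()/write_json()
# and a caps['bondings'] section; verify on the installed vdsm first.
import json

import hooking

caps = hooking.read_json()
with open('/var/log/vdsm/after_get_caps_bondings.json', 'w') as f:
    json.dump(caps.get('bondings', {}), f, indent=2)
hooking.write_json(caps)  # pass the capabilities through unchanged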

(Originally by Olimp Bockowski)

Comment 12 RHV bug bot 2018-12-24 07:38:45 UTC
the excerpt from getCapabilities:

    "bond0": {
      "ipv6autoconf": false,
      "ipv4addrs": [
        
      ],
      "ipv4defaultroute": false,
      "ipv6addrs": [
        
      ],
      "switch": "legacy",
      "active_slave": "",
      "ad_aggregator_id": "4",
      "dhcpv4": false,
      "netmask": "",
      "dhcpv6": false,
      "ad_partner_mac": "38:0e:4d:1d:6b:fc",
      "hwaddr": "90:e2:ba:f0:a1:80",
      "slaves": [
        "ens14f0",
        "ens1f0"
      ],
      "mtu": "1500",
      "ipv6gateway": "::",
      "gateway": "",
      "opts": {
        "miimon": "100",
        "mode": "4"
      },
      "addr": ""
    },

NICs
    "ens14f0": {
      "permhwaddr": "90:e2:ba:f0:a1:80",
      "ipv6autoconf": false,
      "addr": "",
      "ipv4defaultroute": false,
      "speed": 10000,
      "ipv6addrs": [
        
      ],
      "ad_aggregator_id": "4",
      "dhcpv4": false,
      "netmask": "",
      "dhcpv6": false,
      "ipv4addrs": [
        
      ],
      "hwaddr": "90:e2:ba:f0:a1:80",
      "mtu": "1500",
      "ipv6gateway": "::",
      "gateway": ""
    },

    "ens1f0": {
      "permhwaddr": "90:e2:ba:f0:a1:6c",
      "ipv6autoconf": false,
      "addr": "",
      "ipv4defaultroute": false,
      "speed": 10000,
      "ipv6addrs": [
        
      ],
      "ad_aggregator_id": "3",
      "dhcpv4": false,
      "netmask": "",
      "dhcpv6": false,
      "ipv4addrs": [
        
      ],
      "hwaddr": "90:e2:ba:f0:a1:80",
      "mtu": "1500",
      "ipv6gateway": "::",
      "gateway": ""
    },

So the aggregator IDs differ, but I don't see the "details partner lacp pdu" information, which /proc/net/bonding/bond0 shows as:
Slave Interface: ens14f0
details partner lacp pdu:
    system priority: 24576
    system mac address: 38:0e:4d:1d:6b:fc
Slave Interface: ens1f0
details partner lacp pdu:
    system priority: 65535
    system mac address: 00:00:00:00:00:00
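
For illustration, the mismatch visible above can be expressed as a small check over the reported capabilities. The dict below only mimics the relevant fields from the excerpt; the function is a sketch, not engine code:

caps = {
    'bond0':   {'ad_aggregator_id': '4', 'slaves': ['ens14f0', 'ens1f0']},
    'ens14f0': {'ad_aggregator_id': '4'},
    'ens1f0':  {'ad_aggregator_id': '3'},
}

def bond_needs_warning(caps, bond_name):
    # A slave whose aggregator id differs from the bond's active aggregator
    # did not join the LACP aggregate and should trigger the warning icon.
    bond_agg = caps[bond_name].get('ad_aggregator_id')
    return any(caps[slave].get('ad_aggregator_id') != bond_agg
               for slave in caps[bond_name]['slaves'])

print(bond_needs_warning(caps, 'bond0'))  # True for the data above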

(Originally by Olimp Bockowski)

Comment 13 RHV bug bot 2018-12-24 07:38:47 UTC
Marcin, can you check why the Engine does not warn about different agg_ids?

(Originally by danken)

Comment 15 RHV bug bot 2018-12-24 07:38:50 UTC
1. Not reporting invalid ad_aggregator_ids
This is indeed a bug. I will add a fix for this ASAP.

2. Invalid ad partner mac.
The ad partner mac in vdsm is collected only by looking at the bond.
We look at: /sys/class/net/bond0/bonding/ad_partner_mac
Note that in the attached excerpt from getCapabilities, the partner mac is correct: "ad_partner_mac": "38:0e:4d:1d:6b:fc", and this is what is used by the engine to decide if it's valid.
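
For reference, a rough sketch (not the actual vdsm code) of where these values come from, assuming the standard bonding sysfs layout:

def _file_value(path):
    # vdsm has its own helper for this; the stand-in just reads and strips.
    try:
        with open(path) as f:
            return f.read().strip()
    except IOError:
        return None

def bond_partner_mac(bond):
    # Bond-level partner MAC; this is what ends up as "ad_partner_mac" in
    # getCapabilities, and in the excerpt above it is valid, so the existing
    # partner-MAC check passes even though one slave never joined.
    return _file_value('/sys/class/net/%s/bonding/ad_partner_mac' % bond)

def slave_aggregator_id(nic):
    # Per-slave aggregator id (the value compared against the bond's); the
    # bonding_slave directory name is an assumption based on the kernel's
    # sysfs layout for 802.3ad slaves.
    return _file_value('/sys/class/net/%s/bonding_slave/ad_aggregator_id' % nic)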

(Originally by Marcin Mirecki)

Comment 18 RHV bug bot 2018-12-24 07:38:55 UTC
Marcin is away, but I believe that you can emulate the condition with

diff --git a/lib/vdsm/network/netinfo/bonding.py b/lib/vdsm/network/netinfo/bonding.py
index 32d3bfea0..c10610720 100644
--- a/lib/vdsm/network/netinfo/bonding.py
+++ b/lib/vdsm/network/netinfo/bonding.py
@@ -56,6 +56,7 @@ def _file_value(path):
 def get_bond_slave_agg_info(nic_name):
     agg_id_path = BONDING_SLAVE_OPT % (nic_name, 'ad_aggregator_id')
     agg_id = _file_value(agg_id_path)
+    agg_id = '42' if nic_name == 'name-of-a-bond-slave' else agg_id
     return {'ad_aggregator_id': agg_id} if agg_id else {}

(Originally by danken)

Comment 23 Roni 2019-01-16 16:57:46 UTC
Verified at: 4.2.8.2-0.1.el7ev

Comment 25 errata-xmlrpc 2019-01-22 12:44:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0121

