Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1643512

Summary: No exclamation icon when bond with an LACP is misconfigured
Product: Red Hat Enterprise Virtualization Manager Reporter: Olimp Bockowski <obockows>
Component: ovirt-engine Assignee: Marcin Mirecki <mmirecki>
Status: CLOSED ERRATA QA Contact: Roni <reliezer>
Severity: high Docs Contact:
Priority: high    
Version: 4.0.7 CC: danken, mburman, mkalinin, mmirecki, mtessun, obockows, reliezer, Rhev-m-bugs, sborella
Target Milestone: ovirt-4.3.0 Keywords: ZStream
Target Release: ---   
Hardware: Unspecified   
OS: Linux   
Whiteboard:
Fixed In Version: ovirt-engine-4.3.0_rc Doc Type: No Doc Update
Doc Text:
This release ensures that a red exclamation point appears when a bond is misconfigured.
Story Points: ---
Clone Of:
: 1661876 (view as bug list) Environment:
Last Closed: 2019-05-08 12:38:43 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: Network RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1661876    

Description Olimp Bockowski 2018-10-26 12:23:29 UTC
Description of problem:
According to commit 485ae1cb9a75fdf13469389463058ea25d29e244 ("webadmin: display invalid partner mac alert also when partner mac is 00:00:00:00:00:00"), webadmin is expected to display the invalid partner mac alert in this case.

The bond was intentionally misconfigured for test purposes. We had bond0 with 2 NICs; running:

cat ./bonding/bond0  | grep Aggregator

Active Aggregator Info:
	Aggregator ID: 4
Aggregator ID: 4
Aggregator ID: 3

One of the NICs had wrong settings, e.g.:
details partner lacp pdu:
    system priority: 65535
    system mac address: 00:00:00:00:00:00

The invalid partner mac alert on mode 4 bonds (exclamation mark next to the bond name) should have been displayed, but it wasn't there.
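For illustration, both symptoms above can be spotted directly in the /proc/net/bonding dump. The following is only a minimal standalone sketch (not vdsm or engine code; the function name and messages are made up here):

import re

ZERO_MAC = '00:00:00:00:00:00'

def check_bond_proc(text):
    """Return human-readable problems found in a /proc/net/bonding dump."""
    problems = []
    # Everything after each "Slave Interface:" header is one slave's section.
    slave_sections = re.split(r'\nSlave Interface: ', text)[1:]
    agg_ids = set()
    for section in slave_sections:
        nic = section.splitlines()[0].strip()
        # A partner MAC of all zeros means the slave has no LACP partner.
        partner_mac = re.search(
            r'details partner lacp pdu:.*?system mac address: (\S+)',
            section, re.S)
        if partner_mac and partner_mac.group(1) == ZERO_MAC:
            problems.append('%s: partner mac is all zeros (no LACP partner)' % nic)
        agg_id = re.search(r'Aggregator ID: (\d+)', section)
        if agg_id:
            agg_ids.add(agg_id.group(1))
    if len(agg_ids) > 1:
        problems.append('slaves are in different aggregators: %s'
                        % ', '.join(sorted(agg_ids)))
    return problems

if __name__ == '__main__':
    with open('/proc/net/bonding/bond0') as f:
        for problem in check_bond_proc(f.read()):
            print(problem)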

Version-Release number of selected component (if applicable):
4.1.z

How reproducible:
always

Steps to Reproduce:

2 ports on the switch: Ethernet 1/1 and Ethernet 1/2.
The interface Ethernet 1/1 has the line "channel-group 5 mode active", which means it was configured to be part of port-channel5, regardless of the interface Ethernet 1/2.
Intentionally, Ethernet 1/2 was not added to port-channel5.

Actual results:

No alert, even though LACP is misconfigured.

Expected results:

The red exclamation mark next to a bond name appears.

Additional info:
Change: https://gerrit.ovirt.org/#/c/59062/
It was part of [RFE] "Engine should warn admin about bad 802.3ad status" 
https://bugzilla.redhat.com/show_bug.cgi?id=1281666

Comment 3 Dan Kenigsberg 2018-10-31 08:45:14 UTC
Please include vdsm and Engine logs to better understand where the bug is. Which switch do you have to reproduce it "always"? Burman tells me that it does not reproduce on his rhv-4.1.11.

Comment 4 Olimp Bockowski 2018-11-01 14:57:05 UTC
I am really sorry, the version is:
ovirt-engine-4.0.7.4-0.1.el7ev.noarch

Switch - I will have to check with the customer. I am going to attach all information as private, since it contains details of the customer's environment.

Comment 8 Dan Kenigsberg 2018-11-02 13:44:03 UTC
I don't see the output of getCaps in the logs you attached, nor the version of vdsm.

Comment 9 Olimp Bockowski 2018-11-07 11:41:41 UTC
@Dan - I am sorry I didn't open it for the latest version, but I can't try to reproduce it myself due to lack of hardware resources, so I asked the customer to run tests and provide logs.
The network device is: Cisco Nexus 3500
The vdsm version is not in any vdsm log files due to logrotate, but the package is vdsm-4.19.24-1.el7ev.x86_64
As you can see, there is just an INFO message about getCapabilities succeeding, like:

 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getCapabilities succeeded in 0.15 seconds 

so I need DEBUG logging or a hook in /usr/libexec/vdsm/hooks/after_get_caps/.
I am going to ask the customer to run the hook and provide you with what is needed.
I am leaving NEEDINFO on me.
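For reference, such a hook can be just a small executable script dropped under /usr/libexec/vdsm/hooks/after_get_caps/. This is only an illustrative sketch, not a shipped hook; it assumes the read_json()/write_json() helpers of vdsm's hooking module, and the dump path is arbitrary:

#!/usr/bin/python
# Hypothetical debugging hook: append every getCapabilities result to a
# file, then pass the capabilities through unchanged. Remember to make the
# script executable and to remove it once the data has been collected.
import json

import hooking  # vdsm's hook helper module

caps = hooking.read_json()
with open('/var/log/vdsm/caps_dump.json', 'a') as f:  # arbitrary dump path
    json.dump(caps, f, indent=2)
    f.write('\n')
hooking.write_json(caps)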

Comment 12 Olimp Bockowski 2018-11-08 13:43:31 UTC
The excerpt from getCapabilities:

    "bond0": {
      "ipv6autoconf": false,
      "ipv4addrs": [
        
      ],
      "ipv4defaultroute": false,
      "ipv6addrs": [
        
      ],
      "switch": "legacy",
      "active_slave": "",
      "ad_aggregator_id": "4",
      "dhcpv4": false,
      "netmask": "",
      "dhcpv6": false,
      "ad_partner_mac": "38:0e:4d:1d:6b:fc",
      "hwaddr": "90:e2:ba:f0:a1:80",
      "slaves": [
        "ens14f0",
        "ens1f0"
      ],
      "mtu": "1500",
      "ipv6gateway": "::",
      "gateway": "",
      "opts": {
        "miimon": "100",
        "mode": "4"
      },
      "addr": ""
    },

NICs
    "ens14f0": {
      "permhwaddr": "90:e2:ba:f0:a1:80",
      "ipv6autoconf": false,
      "addr": "",
      "ipv4defaultroute": false,
      "speed": 10000,
      "ipv6addrs": [
        
      ],
      "ad_aggregator_id": "4",
      "dhcpv4": false,
      "netmask": "",
      "dhcpv6": false,
      "ipv4addrs": [
        
      ],
      "hwaddr": "90:e2:ba:f0:a1:80",
      "mtu": "1500",
      "ipv6gateway": "::",
      "gateway": ""
    },

    "ens1f0": {
      "permhwaddr": "90:e2:ba:f0:a1:6c",
      "ipv6autoconf": false,
      "addr": "",
      "ipv4defaultroute": false,
      "speed": 10000,
      "ipv6addrs": [
        
      ],
      "ad_aggregator_id": "3",
      "dhcpv4": false,
      "netmask": "",
      "dhcpv6": false,
      "ipv4addrs": [
        
      ],
      "hwaddr": "90:e2:ba:f0:a1:80",
      "mtu": "1500",
      "ipv6gateway": "::",
      "gateway": ""
    },

So, different aggregators, but I don't see the "details partner lacp pdu" information that appears in /proc/net/bonding/bond0 as:
Slave Interface: ens14f0
details partner lacp pdu:
    system priority: 24576
    system mac address: 38:0e:4d:1d:6b:fc
Slave Interface: ens1f0
details partner lacp pdu:
    system priority: 65535
    system mac address: 00:00:00:00:00:00
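
As a side note, a check over this data could be as simple as the following sketch. It is not engine code; the 'bondings'/'nics' top-level keys and the attribute names are assumed to be shaped like the excerpt above:

def mismatched_slaves(caps):
    """Return (bond, slave, slave_agg_id, bond_agg_id) for LACP bonds whose
    slaves ended up in a different aggregator than the bond itself."""
    problems = []
    for bond, battrs in caps.get('bondings', {}).items():
        if battrs.get('opts', {}).get('mode') != '4':
            continue  # aggregator IDs are only meaningful for mode 4 (LACP)
        bond_agg = battrs.get('ad_aggregator_id')
        for slave in battrs.get('slaves', []):
            slave_agg = caps.get('nics', {}).get(slave, {}).get('ad_aggregator_id')
            if bond_agg and slave_agg and slave_agg != bond_agg:
                problems.append((bond, slave, slave_agg, bond_agg))
    return problems

# With the values above this returns [('bond0', 'ens1f0', '3', '4')].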

Comment 13 Dan Kenigsberg 2018-11-13 11:18:30 UTC
Marcin, can you check why Engine does not warn about different agg_ids?

Comment 15 Marcin Mirecki 2018-11-28 16:19:29 UTC
1. Not reporting invalid ad_aggregator_ids
This is indeed a bug. I will add a fix for this ASAP.

2. Invalid ad partner mac.
The ad partner mac in vdsm is collected only by looking at the bond.
We look at: /sys/class/net/bond0/bonding/ad_partner_mac
Note that in the attached excerpt from getCapabilities, the partner mac is correct: "ad_partner_mac": "38:0e:4d:1d:6b:fc", and this is what the engine uses to decide whether it is valid.
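
In other words, the bond-level view offers only something like the sketch below (illustrative only, not the actual vdsm code): the bond reports a single partner MAC, so one healthy slave is enough to make the whole bond look valid even when the other slave has no partner.

ZERO_MAC = '00:00:00:00:00:00'

def bond_partner_mac(bond):
    # The same sysfs attribute mentioned above; one value for the whole bond.
    path = '/sys/class/net/%s/bonding/ad_partner_mac' % bond
    try:
        with open(path) as f:
            return f.read().strip()
    except IOError:
        return None  # not readable, e.g. the bond does not exist

def partner_mac_looks_valid(bond):
    mac = bond_partner_mac(bond)
    return mac is not None and mac != ZERO_MAC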

Comment 18 Dan Kenigsberg 2018-12-23 06:41:38 UTC
Marcin is away, but I believe that you can emulate the condition with

diff --git a/lib/vdsm/network/netinfo/bonding.py b/lib/vdsm/network/netinfo/bonding.py
index 32d3bfea0..c10610720 100644
--- a/lib/vdsm/network/netinfo/bonding.py
+++ b/lib/vdsm/network/netinfo/bonding.py
@@ -56,6 +56,7 @@ def _file_value(path):
 def get_bond_slave_agg_info(nic_name):
     agg_id_path = BONDING_SLAVE_OPT % (nic_name, 'ad_aggregator_id')
     agg_id = _file_value(agg_id_path)
+    agg_id = '42' if nic_name == 'name-of-a-bond-slave' else agg_id
     return {'ad_aggregator_id': agg_id} if agg_id else {}

Comment 20 Roni 2019-01-15 15:48:03 UTC
In v4.3.0-0.6.alpha2.el7 the exclamation mark icon first appears when creating the bond,
but disappears after restarting the vdsmd service.
It does not appear again after clicking Refresh Capabilities.

Comment 22 Michael Burman 2019-01-16 14:06:17 UTC
The bug has failed qa on latest rhvm-4.2.8.2-0.1.el7ev.noarch

Comment 23 Michael Burman 2019-01-16 14:14:50 UTC
(In reply to Michael Burman from comment #22)
> The bug has failed qa on latest rhvm-4.2.8.2-0.1.el7ev.noarch

Ignore this comment

Comment 24 Michael Burman 2019-01-16 14:15:18 UTC
Roni, on which version did you test this bug?

Comment 25 Roni 2019-01-16 16:55:35 UTC
verified at: 4.3.0-0.8.master

Comment 26 Roni 2019-01-20 13:40:52 UTC
verified downstream (d/s) 4.3.0-0.8.rc2.el7

Comment 28 errata-xmlrpc 2019-05-08 12:38:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:1085