Bug 845525 - beta2 - VDSM is not reporting the BONDING_OPTS for bonds
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.4
Hardware: x86_64 Linux
Priority: unspecified  Severity: high
Target Milestone: beta
Target Release: 6.4
Assigned To: Dan Kenigsberg
QA Contact: Martin Pavlik
Whiteboard: network
Keywords: Regression
Duplicates: 845804
Depends On:
Blocks:
Reported: 2012-08-03 06:13 EDT by Martin Pavlik
Modified: 2015-04-06 23:09 EDT
CC: 16 users

See Also:
Fixed In Version: v4.9.6-27.0
Doc Type: Bug Fix
Doc Text:
Previously, when you added a bridged logical network to a cluster, created a bond in mode=1, and attached the logical network to the bond, Red Hat Enterprise Virtualization Manager displayed the wrong bond mode. This happened because VDSM failed to report the BONDING_OPTS for the bonds, so the backend could not resolve the bond mode and fell back to the default of "custom", which was incorrect. With this update, VDSM reports the bonding options and the bond mode is set properly.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-12-04 14:04:30 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
logs (413.42 KB, application/x-gzip), 2012-08-03 06:13 EDT, Martin Pavlik
screenshot 1 (191.56 KB, image/png), 2012-08-03 06:13 EDT, Martin Pavlik

Description Martin Pavlik 2012-08-03 06:13:04 EDT
Created attachment 602101 [details]
logs

Description of problem:
If a bond (tested with mode=1) is created via Setup Networks, the bond is created properly on the host, but RHEV-M displays the bond mode as "custom".

Version-Release number of selected component (if applicable):
Red Hat Enterprise Virtualization Manager Version: '3.1.0-10.el6ev' 

How reproducible:
100%

Steps to Reproduce:
1. Add bridged logical network to cluster
2. Host -> your host -> network interfaces -> setup host networks -> create bond (mode=1) and attach network from step 1 to bond -> click OK
3. Host -> your host -> network interfaces -> setup host networks -> click pencil icon on bond (see screenshot1)
  
Actual results:
The wrong bond mode is displayed.

Expected results:
The correct bond mode should be displayed.

Additional info:

Parameters are sent correctly:

MainProcess|Thread-5765::DEBUG::2012-08-03 11:58:41,945::configNetwork::1189::setupNetworks::(setupNetworks) Setting up network according to configuration: networks:{'NET1': {'bonding': 'bond1', 'STP': 'no', 'bridged': 'true'}}, bondings:{'bond1': {'nics': ['p1p1', 'p1p2'], 'options': 'mode=1 miimon=100'}}, options:{'connectivityCheck': 'true', 'connectivityTimeout': 120}
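
For reference, the same call can in principle be issued directly against VDSM's XML-RPC interface rather than through the UI; a minimal sketch (Python 2, matching VDSM of that era), assuming direct access to the host on VDSM's port 54321 and eliding the SSL client-certificate setup that 'vdsClient -s' performs. The payload is copied verbatim from the log line above:

```python
# Hypothetical reproduction of the setupNetworks call seen in the log.
# The host name is a placeholder, and a real call needs the same SSL
# client certificate that 'vdsClient -s' uses.
import xmlrpclib

server = xmlrpclib.ServerProxy('https://dell-r210ii-07:54321')

networks = {'NET1': {'bonding': 'bond1', 'STP': 'no', 'bridged': 'true'}}
bondings = {'bond1': {'nics': ['p1p1', 'p1p2'], 'options': 'mode=1 miimon=100'}}
options = {'connectivityCheck': 'true', 'connectivityTimeout': 120}

print server.setupNetworks(networks, bondings, options)
```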


The bond is created properly:

[root@dell-r210ii-07 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond1 
DEVICE=bond1
ONBOOT=yes
BONDING_OPTS='mode=1 miimon=100'
BRIDGE=NET1
NM_CONTROLLED=no
STP=no
[root@dell-r210ii-07 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: load balancing (round-robin)
MII Status: down
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
[root@dell-r210ii-07 ~]# cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: p1p1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: p1p1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:04:29:88
Slave queue ID: 0

Slave Interface: p1p2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 90:e2:ba:04:29:89
Slave queue ID: 0
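
For completeness, the runtime mode can also be read from sysfs, independent of what RHEV-M displays; a minimal sketch, using the bond created above:

```python
# Read the kernel's runtime view of a bond's mode from sysfs; for bond1
# above this returns 'active-backup 1', matching mode=1 in BONDING_OPTS.
def bond_mode(bond):
    with open('/sys/class/net/%s/bonding/mode' % bond) as f:
        return f.read().strip()

print(bond_mode('bond1'))
```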
Comment 1 Martin Pavlik 2012-08-03 06:13:49 EDT
Created attachment 602102 [details]
screenshot 1
Comment 3 Simon Grinberg 2012-08-06 04:14:06 EDT
*** Bug 845804 has been marked as a duplicate of this bug. ***
Comment 6 Moti Asayag 2012-08-06 05:21:00 EDT
Martin, can you add the output of 'vdsClient -s 0 getVdsCaps' to verify how VDSM reports that bond?
Comment 7 Martin Pavlik 2012-08-06 06:08:58 EDT
vdsm-4.9.6-26.0.el6_3.x86_64

[root@dell-r210ii-07 ~]# vdsClient -s 0 getVdsCaps
	HBAInventory = {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:2bf48d38b7da'}], 'FC': []}
	ISCSIInitiatorName = iqn.1994-05.com.redhat:2bf48d38b7da
	bondings = {'bond4': {'addr': '', 'cfg': {'SLAVE': 'yes', 'ONBOOT': 'yes', 'NM_CONTROLLED': 'no', 'STP': 'no', 'HWADDR': '90:E2:BA:04:29:89', 'MASTER': 'bond4', 'DEVICE': 'p1p2'}, 'mtu': '1500', 'netmask': '', 'slaves': ['p1p1', 'p1p2'], 'hwaddr': '90:E2:BA:04:29:88'}, 'bond0': {'addr': '', 'cfg': {'SLAVE': 'yes', 'ONBOOT': 'yes', 'NM_CONTROLLED': 'no', 'STP': 'no', 'HWADDR': '90:E2:BA:04:29:89', 'MASTER': 'bond4', 'DEVICE': 'p1p2'}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond1': {'addr': '', 'cfg': {'SLAVE': 'yes', 'ONBOOT': 'yes', 'NM_CONTROLLED': 'no', 'STP': 'no', 'HWADDR': '90:E2:BA:04:29:89', 'MASTER': 'bond4', 'DEVICE': 'p1p2'}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond2': {'addr': '', 'cfg': {'SLAVE': 'yes', 'ONBOOT': 'yes', 'NM_CONTROLLED': 'no', 'STP': 'no', 'HWADDR': '90:E2:BA:04:29:89', 'MASTER': 'bond4', 'DEVICE': 'p1p2'}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {'SLAVE': 'yes', 'ONBOOT': 'yes', 'NM_CONTROLLED': 'no', 'STP': 'no', 'HWADDR': '90:E2:BA:04:29:89', 'MASTER': 'bond4', 'DEVICE': 'p1p2'}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}
	bridges = {'rhevm': {'addr': '10.34.66.71', 'cfg': {'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'rhevm', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['em1']}, 'NET1': {'addr': '', 'cfg': {'DELAY': '0', 'NM_CONTROLLED': 'no', 'STP': 'no', 'DEVICE': 'NET1', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'stp': 'off', 'ports': ['bond4']}}
	clusterLevels = ['3.0', '3.1']
	cpuCores = 4
	cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,sse4_1,sse4_2,x2apic,popcnt,aes,xsave,avx,lahf_lm,ida,arat,epb,xsaveopt,pln,pts,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_Penryn,model_Westmere,model_SandyBridge
	cpuModel = Intel(R) Xeon(R) CPU E31240 @ 3.30GHz
	cpuSockets = 1
	cpuSpeed = 3292.421
	emulatedMachines = ['rhel6.3.0', 'pc', 'rhel6.2.0', 'rhel6.1.0', 'rhel6.0.0', 'rhel5.5.0', 'rhel5.4.4', 'rhel5.4.0']
	guestOverhead = 65
	hooks = {}
	kvmEnabled = true
	lastClient = 10.34.63.65
	lastClientIface = rhevm
	management_ip = 
	memSize = 7862
	netConfigDirty = True
	networks = {'rhevm': {'iface': 'rhevm', 'addr': '10.34.66.71', 'cfg': {'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'rhevm', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway': '10.34.66.254', 'ports': ['em1']}, 'NET1': {'iface': 'NET1', 'addr': '', 'cfg': {'DELAY': '0', 'NM_CONTROLLED': 'no', 'STP': 'no', 'DEVICE': 'NET1', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'stp': 'off', 'bridged': True, 'gateway': '0.0.0.0', 'ports': ['bond4']}}
	nics = {'p1p1': {'permhwaddr': '90:E2:BA:04:29:88', 'addr': '', 'cfg': {'SLAVE': 'yes', 'ONBOOT': 'yes', 'NM_CONTROLLED': 'no', 'STP': 'no', 'HWADDR': '90:E2:BA:04:29:88', 'MASTER': 'bond4', 'DEVICE': 'p1p1'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '90:E2:BA:04:29:88', 'speed': 1000}, 'p1p2': {'permhwaddr': '90:E2:BA:04:29:89', 'addr': '', 'cfg': {'SLAVE': 'yes', 'ONBOOT': 'yes', 'NM_CONTROLLED': 'no', 'STP': 'no', 'HWADDR': '90:E2:BA:04:29:89', 'MASTER': 'bond4', 'DEVICE': 'p1p2'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '90:E2:BA:04:29:88', 'speed': 1000}, 'em1': {'addr': '', 'cfg': {'BRIDGE': 'rhevm', 'NM_CONTROLLED': 'no', 'DEVICE': 'em1', 'BOOTPROTO': 'dhcp', 'HWADDR': 'D0:67:E5:F0:82:44', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr': 'D0:67:E5:F0:82:44', 'speed': 1000}, 'em2': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'hwaddr': 'D0:67:E5:F0:82:45', 'speed': 0}}
	operatingSystem = {'release': '6.3.0.3.el6', 'version': '6Server', 'name': 'RHEL'}
	packages2 = {'kernel': {'release': '279.el6.x86_64', 'buildtime': 1339604676.0, 'version': '2.6.32'}, 'spice-server': {'release': '10.el6', 'buildtime': '1337611492', 'version': '0.10.1'}, 'vdsm': {'release': '26.0.el6_3', 'buildtime': '1343838445', 'version': '4.9.6'}, 'qemu-kvm': {'release': '2.298.el6_3', 'buildtime': '1343061272', 'version': '0.12.1.2'}, 'libvirt': {'release': '21.el6_3.3', 'buildtime': '1341975208', 'version': '0.9.10'}, 'qemu-img': {'release': '2.298.el6_3', 'buildtime': '1343061272', 'version': '0.12.1.2'}}
	reservedMem = 321
	software_revision = 26.0
	software_version = 4.9
	supportedProtocols = ['2.2', '2.3']
	supportedRHEVMs = ['3.0', '3.1']
	uuid = 4C4C4544-0037-5410-8033-C3C04F39354A_90:E2:BA:04:29:88
	version_name = Snow Man
	vlans = {}
	vmTypes = ['kvm']
Comment 8 Moti Asayag 2012-08-06 06:26:59 EDT
VDSM is not reporting the BONDING_OPTS for the bonds; therefore the backend cannot resolve the bond mode and sets it to 'custom' with no value.

Assigning the bug to VDSM component.
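
For illustration, the resolution step described above amounts to the following; a minimal sketch in Python (the actual backend is Java, and resolve_bond_mode is a hypothetical name):

```python
# Illustrative sketch of the mode resolution Moti describes; not the
# backend's actual code. The mode can only be derived when VDSM includes
# BONDING_OPTS in the bond's 'cfg' dict.
def resolve_bond_mode(cfg):
    opts = cfg.get('BONDING_OPTS', '')
    for opt in opts.split():
        key, _, value = opt.partition('=')
        if key == 'mode':
            return value          # e.g. '1' for active-backup
    return 'custom'               # fallback when BONDING_OPTS is missing

# With the getVdsCaps output in comment 7, 'cfg' lacks BONDING_OPTS
# entirely, so every bond falls back to 'custom'.
```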
Comment 11 Dan Kenigsberg 2012-08-06 09:51:44 EDT
argh, broken merger fixed in

http://gerrit.usersys.redhat.com/1381
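
Note that in the getVdsCaps output in comment 7, every bond's 'cfg' dict carries the ifcfg of slave p1p2, which is consistent with a merge bug of this kind. A minimal sketch of the intended per-device behaviour, assuming 'cfg' is derived from the ifcfg files (read_ifcfgs is a hypothetical name; the actual fix is the gerrit change above):

```python
# Sketch: collect ifcfg key/value pairs keyed by DEVICE, so each device
# keeps its own dict and no entry can overwrite another's.
import glob

def read_ifcfgs():
    cfgs = {}
    for path in glob.glob('/etc/sysconfig/network-scripts/ifcfg-*'):
        cfg = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith('#') and '=' in line:
                    key, _, value = line.partition('=')
                    cfg[key] = value.strip('\'"')
        if 'DEVICE' in cfg:
            cfgs[cfg['DEVICE']] = cfg   # one entry per device
    return cfgs

# With the ifcfg-bond1 shown in the description, read_ifcfgs()['bond1']
# would include BONDING_OPTS='mode=1 miimon=100', which is what the
# backend needs to resolve the mode.
```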
Comment 15 errata-xmlrpc 2012-12-04 14:04:30 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2012-1508.html
