Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
This bug concerns the Red Hat Enterprise Linux 5 product line; the current stable release is 5.10. For Red Hat Enterprise Linux 6 and above, please report new issues in Red Hat JIRA: https://issues.redhat.com/secure/CreateIssue!default.jspa?pid=12332745

Bug 524206

Summary: bonding mode on bond1 is set from bond0 mode instead of bond1 options
Product: Red Hat Enterprise Linux 5
Component: kernel
Version: 5.4
Status: CLOSED NOTABUG
Severity: medium
Priority: low
Reporter: David111 <dlecorfec>
Assignee: Andy Gospodarek <agospoda>
QA Contact: Red Hat Kernel QE team <kernel-qe>
CC: ctac113, dzickus, peterm
Target Milestone: rc
Hardware: All
OS: Linux
Doc Type: Bug Fix
Last Closed: 2009-09-25 20:55:59 UTC

Description David111 2009-09-18 11:56:44 UTC
Description of problem:
 The bonding mode on bond1 and bond2 is copied from bond0's bonding mode, instead of taking the mode specified in the bond1 and bond2 options lines.

Version-Release number of selected component (if applicable):
 2.6.18-128.7.1.el5
 2.6.18-164.el5

How reproducible:
 Always

Steps to Reproduce:
1. Add to /etc/modprobe.conf:
 alias bond0 bonding
 options bond0 mode=1 miimon=80
 alias bond1 bonding
 options bond1 mode=6 miimon=80
 alias bond2 bonding
 options bond2 mode=6 miimon=80

2. Reboot
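
After rebooting, each bond's active mode can also be checked directly through sysfs (a quick diagnostic sketch; the /sys/class/net/bondX/bonding/mode attributes are part of the bonding driver's sysfs interface, and the interface names match the reproduction steps above):

 # Each bond should report the mode from its own "options" line;
 # with this bug, bond1 and bond2 inherit bond0's mode instead.
 cat /sys/class/net/bond0/bonding/mode   # expected: active-backup (1)
 cat /sys/class/net/bond1/bonding/mode   # expected: balance-alb (6)
 cat /sys/class/net/bond2/bonding/mode   # expected: balance-alb (6)
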
  
Actual results:
 # cat /proc/net/bonding/bond1
 Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)

 Bonding Mode: fault-tolerance (active-backup)
 ...

Expected results:
 # cat /proc/net/bonding/bond1
 Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)

 Bonding Mode: adaptive load balancing
 ...

Additional info:
 Tested on CentOS 5.3; happens at least with the -128 and -164 kernels (the latter from 5.4).

Comment 1 David111 2009-09-18 11:59:39 UTC
dmesg from the test case:

Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)
bonding: MII link monitoring set to 80 ms
bonding: bond0: Adding slave eth0.
bnx2x: eth0: using MSI-X
bonding: bond0: enslaving eth0 as a backup interface with a down link.
bnx2x: eth0 NIC Link is Down
bonding: bond0: Adding slave eth1.
bnx2x: eth1: using MSI-X
bonding: bond0: enslaving eth1 as a backup interface with a down link.
bnx2x: eth0 NIC Link is Down
bnx2x: eth1 NIC Link is Down
bnx2x: eth0 NIC Link is Up, 10000 Mbps full duplex, receive & transmit flow control ON
bonding: bond0: link status definitely up for interface eth0.
bonding: bond0: making interface eth0 the new active one.
bonding: bond0: first active interface up!
bnx2x: eth1 NIC Link is Down
bnx2x: eth1 NIC Link is Up, 10000 Mbps full duplex, receive & transmit flow control ON
bonding: bond0: link status definitely up for interface eth1.
bonding: bond1 is being created...
bonding: bond1: Adding slave eth2.
bnx2x: eth2: using MSI-X
bnx2x: eth2 NIC Link is Up, 10000 Mbps full duplex, receive & transmit flow control ON
bonding: bond1: Warning: failed to get speed and duplex from eth2, assumed to be 100Mb/sec and Full.
bonding: bond1: making interface eth2 the new active one.
bonding: bond1: first active interface up!
bonding: bond1: enslaving eth2 as an active interface with an up link.
bonding: bond1: Adding slave eth3.
bnx2x: eth3: using MSI-X
bnx2x: eth3 NIC Link is Up, 10000 Mbps full duplex, receive & transmit flow control ON
bonding: bond1: Warning: failed to get speed and duplex from eth3, assumed to be 100Mb/sec and Full.
bonding: bond1: enslaving eth3 as a backup interface with an up link.
bonding: bond2 is being created...
bonding: bond2: Adding slave eth4.
bnx2x: eth4: using MSI-X
bnx2x: eth4 NIC Link is Up, 10000 Mbps full duplex, receive & transmit flow control ON
bonding: bond2: Warning: failed to get speed and duplex from eth4, assumed to be 100Mb/sec and Full.
bonding: bond2: making interface eth4 the new active one.
bonding: bond2: first active interface up!
bonding: bond2: enslaving eth4 as an active interface with an up link.
bonding: bond2: Adding slave eth5.
bnx2x: eth5: using MSI-X
bnx2x: eth5 NIC Link is Up, 10000 Mbps full duplex, receive & transmit flow control ON
bonding: bond2: Warning: failed to get speed and duplex from eth5, assumed to be 100Mb/sec and Full.
bonding: bond2: enslaving eth5 as a backup interface with an up link.

Comment 2 CTAC 2009-09-21 01:32:31 UTC
Have you read the docs ??


$ grep BONDING_MODULE_OPTS /usr/share/doc/iputils-20020927/README.bonding 
BONDING_MODULE_OPTS="mode=active-backup miimon=100"

Now you should set up bonding with the /etc/sysconfig/network-scripts/ifcfg-bond* files instead of /etc/modprobe.conf:

$ grep BOND /etc/sysconfig/network-scripts/ifcfg-bond0 
BONDING_MODULE_OPTS="miimon=100 mode=active-backup use_carrier=1 primary=eth0"

Comment 3 David111 2009-09-21 08:44:47 UTC
Yes, I've read the docs, but the one I read was http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/Deployment_Guide-en-US/s3-modules-bonding-directives.html, so I've been bitten by the longstanding "Bonding methods have changed for RHEL 5, deployment guide needs to be updated" bug: https://bugzilla.redhat.com/show_bug.cgi?id=238660

How was I supposed to find the doc in such an obscure place as /usr/share/doc/iputils-20020927/README.bonding? If only there were a "man bonding" :)

Thank you for the pointer, though. I guess this ticket can be closed and/or linked to 238660 :)

Comment 4 David111 2009-09-21 10:24:18 UTC
Ah, ctac113, it's not BONDING_MODULE_OPTS but BONDING_OPTS in RHEL5 (I've tested both)

(thanks to the info in ticket 238660)
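
For reference, the working RHEL 5 setup moves the per-bond options out of /etc/modprobe.conf and into the per-interface config files (a sketch matching the modes from the original report; the ifcfg keys besides BONDING_OPTS are the usual minimal settings and may need adjusting for a real network):

 /etc/modprobe.conf:
  alias bond0 bonding
  alias bond1 bonding

 /etc/sysconfig/network-scripts/ifcfg-bond1:
  DEVICE=bond1
  BOOTPROTO=none
  ONBOOT=yes
  BONDING_OPTS="mode=6 miimon=80"

With BONDING_OPTS, the initscripts pass each bond its own options when the interface is brought up, so bond1 no longer inherits bond0's mode.
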

Comment 5 Andy Gospodarek 2009-09-25 20:55:59 UTC
I apologize for the confusion on the bonding configuration.  As stated at the bottom of:

http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/Deployment_Guide-en-US/s3-modules-bonding-directives.html

this file can also be examined, and since I actually control it, I know its information has been correct since at least RHEL 5.2:

/usr/share/doc/kernel-doc-<kernel-version>/Documentation/networking/bonding.txt

Just as a reference, the latest deployment guide does outline that BONDING_OPTS should be used in ifcfg-bondX.  Unfortunately it also states that those options can be used in /etc/modprobe.conf under #4 in section 41.5.2.

Just in case you are interested, here is the document I am discussing:

http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Deployment_Guide/s2-modules-bonding.html

Since it sounds like your issue has been resolved, I am going to close this bug, but please open it again if you are unable to get both bonding interfaces to work correctly on the latest kernel.