After a normal boot, the system always comes up with the first slave down. If you manually bring the slave interface up ('ip link set dev <slave> up'), it comes back online. Restarting the network service reproduces the problem, and if you de-enslave eth0 (ifenslave -d bond0 eth0), eth1 goes down.

Module options:
install bond0 /sbin/modprobe bonding -o bond0 miimon=500 mode=4 lacp_rate=fast xmit_hash_policy=layer3+4 use_carrier=0

Changing to use_carrier=1 (the default) didn't help. A log with debug output enabled in the bonding driver is attached, but I couldn't see anything odd there. The message log shows the interface detected, then enslaved, link up, and finally link down.

Additional info: "I have the same configuration on a DL580 server where I don't have the issue. I also don't have the issue with the BL680c (full-height blades) in the same enclosure. Only the half-height blades (BL460c) in the enclosure have the issue. Both the BL680c and BL460c share the same switches in the enclosure."
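For reference, a sketch of how the reported stanza would sit in a modprobe configuration file, plus the manual workaround and a status check; this assumes eth0 is the downed slave (the report does not say which slave goes down):

```shell
# /etc/modprobe.conf stanza as reported (hedged: file location may differ):
# install bond0 /sbin/modprobe bonding -o bond0 miimon=500 mode=4 \
#     lacp_rate=fast xmit_hash_policy=layer3+4 use_carrier=0

# Manual workaround after boot -- eth0 is an assumption here:
ip link set dev eth0 up

# Inspect the bond and per-slave MII state reported by the driver:
cat /proc/net/bonding/bond0
```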
Created attachment 341962 [details] log of bonding with debug enabled
Created attachment 341965 [details] patch fixing link partner

This patch was applied when the bonding driver debug output was captured, but it didn't help.
Marking this as a duplicate. This issue is resolved in RHEL4.7. *** This bug has been marked as a duplicate of bug 437865 ***
Can you please provide additional info? On which machines does this issue occur? Only the BL460c? Do we have one suitable for internal testing? Does this issue also occur with RHEL5 and the upstream kernel?
I also want to ask what the output of "mii-tool" and "ethtool eth2" shows after the line "Mar 18 16:56:22 pmtraams02 kernel: bonding: bond0: link status definitely down for interface eth2, disabling it". Thanks
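Alongside mii-tool/ethtool output, the per-slave state can be pulled from /proc/net/bonding/bond0. A minimal sketch of scanning that output for slaves whose MII status is down; the sample text is illustrative (modeled on the standard bonding driver status format, not taken from the affected machine), and the function name is ours:

```python
# Sketch: find slaves reported down in /proc/net/bonding/bond0-style output.
def down_slaves(status_text):
    """Return names of slave interfaces whose MII Status is not 'up'."""
    slaves = []
    current = None
    for line in status_text.splitlines():
        line = line.strip()
        if line.startswith("Slave Interface:"):
            current = line.split(":", 1)[1].strip()
        elif line.startswith("MII Status:") and current is not None:
            # Only slave-level MII Status lines reach here; the bond-level
            # one appears before any "Slave Interface:" line and is skipped.
            if line.split(":", 1)[1].strip() != "up":
                slaves.append(current)
            current = None
    return slaves

# Illustrative sample, not real output from the affected blade:
SAMPLE = """\
Ethernet Channel Bonding Driver

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up
MII Polling Interval (ms): 500

Slave Interface: eth0
MII Status: down
Link Failure Count: 1

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
"""

if __name__ == "__main__":
    print(down_slaves(SAMPLE))
```

On a live system the same function could be fed `open("/proc/net/bonding/bond0").read()` instead of the sample text.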
I tried to reproduce the problem on the only machine we have in rhts: hp-bl460c-02.rhts.bos.redhat.com. No luck :/
Closing as a dup of bz#432622 *** This bug has been marked as a duplicate of bug 432622 ***