Bug 1364476 - [RHV-H 4.0] bonding setup does not work if bond0 is used
Summary: [RHV-H 4.0] bonding setup does not work if bond0 is used
Keywords:
Status: CLOSED DUPLICATE of bug 1356635
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-node-ng
Version: 4.0.0
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Fabian Deutsch
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 1338732
 
Reported: 2016-08-05 12:51 UTC by Martin Tessun
Modified: 2017-02-06 23:15 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-05 13:38:48 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:



Description Martin Tessun 2016-08-05 12:51:39 UTC
Description of problem:
Setting up a bond0 device in anaconda for RHV-H 4.0 NGN results in the device not coming up at boot time

Version-Release number of selected component (if applicable):
RHV-H 4.0 NGN beta2

How reproducible:
always

Steps to Reproduce:
1. Set up RHV-H with a bond0 device and at least one slave
2. After reboot, bond0 is down (see the verification sketch after this list)
3. nmcli con up bond0 does not bring the interface up
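A minimal verification sketch (assumed commands; device and connection names taken from this report):

ip link show bond0      # the kernel device exists but is in state DOWN
nmcli con show          # the bond0 connection is not bound to a device
nmcli con up bond0      # expected to fail with "not available on device bond0"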

Actual results:
nmcli con up bond0 results in the following error:

Error: Connection activation failed: Connection `bond0' is not available on device bond0 at this time.

Using "ip link set up dev bond0" gets the interface up and configured correctly

Expected results:
Device bond0 should become active without this workaround.

Additional info:
This is a pure NGN bug, as the same setup works as expected on RHEL 7.2.
The root cause is that bond0 already exists in every RHV-H image before the network is configured, which leads to the above error.
Using e.g. bond1 instead brings everything up as expected (a possible avoidance sketch follows the output below):

[root@rhevh1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond1 state UP qlen 1000
    link/ether 52:54:00:61:27:02 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:e1:33:25 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond1 state UP qlen 1000
    link/ether 52:54:00:61:27:02 brd ff:ff:ff:ff:ff:ff
5: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN 
    link/ether ca:2f:47:5e:21:ef brd ff:ff:ff:ff:ff:ff
6: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 52:54:00:61:27:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.111.11/24 brd 192.168.111.255 scope global bond1
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe61:2702/64 scope link 
       valid_lft forever preferred_lft forever
[root@rhevh1 ~]# nmcli con show
NAME               UUID                                  TYPE            DEVICE 
bond1              43319ce2-0d10-48e2-8a49-9a9b43b92469  bond            bond1  
eth0               6d963fbf-fd0e-4381-9262-3bd9aa617485  802-3-ethernet  --     
eth2               5527472d-eae0-4973-9e24-cee6d2547319  802-3-ethernet  --     
bond1s1            35f2012a-5ed3-426a-af3f-e24553275df9  802-3-ethernet  eth0   
bond1s2            352c4770-3c11-4eac-acc0-6ecc42e8b44e  802-3-ethernet  eth2   
eth1               d1f638e6-8344-44c5-b3c8-a13cf346a926  802-3-ethernet  --     
[root@rhevh1 ~]#
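A possible avoidance sketch, assuming the bond is created under a name other than bond0 (nmcli commands are an assumption; interface names eth0/eth2 and the address are taken from the output above, and the bonding mode depends on the desired setup):

nmcli con add type bond con-name bond1 ifname bond1 mode active-backup ip4 192.168.111.11/24
nmcli con add type bond-slave con-name bond1s1 ifname eth0 master bond1
nmcli con add type bond-slave con-name bond1s2 ifname eth2 master bond1
nmcli con up bond1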

Comment 1 Fabian Deutsch 2016-08-05 13:32:27 UTC
I wonder if this is similar to bug 1356635

Comment 2 Martin Tessun 2016-08-05 13:38:48 UTC
Hi Fabian,

reading https://bugzilla.redhat.com/show_bug.cgi?id=1356635#c20 and the following comments, I fully agree. This BZ describes exactly the same issue, as bond0 was also used for the bonding there.

Closing this as duplicate.

*** This bug has been marked as a duplicate of bug 1356635 ***

