Bug 1196038 - [RFE] hosted-engine setup is unable to use bonded vlan interface as "nic to set rhevm bridge on"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-hosted-engine-setup
Version: unspecified
Hardware: All
OS: Linux
Priority: high
Severity: medium
Target Milestone: ovirt-3.6.3
Target Release: 3.6.3
Assignee: Simone Tiraboschi
QA Contact: Michael Burman
URL:
Whiteboard: integration
Depends On: 1134346
Blocks:
 
Reported: 2015-02-25 07:12 UTC by Anand Nande
Modified: 2019-07-11 08:41 UTC

Fixed In Version:
Doc Type: Enhancement
Doc Text:
With this update, hosted-engine-setup is able to deploy the Red Hat Enterprise Virtualization Manager bridge using a bonded VLAN interface.
Clone Of: 1134346
Environment:
Last Closed: 2016-03-09 19:08:08 UTC
oVirt Team: Integration
Target Upstream Version:
Embargoed:
sherold: Triaged+


Attachments
record vlan over bond (3.80 MB, application/x-gzip), 2016-02-03 09:30 UTC, Michael Burman
answers conf (1.34 KB, application/x-gzip), 2016-02-03 13:17 UTC, Michael Burman


Links
System ID | Status | Summary | Last Updated
Red Hat Bugzilla 1303931 | None | None | 2021-01-20 06:05:38 UTC
Red Hat Knowledge Base (Solution) 1417783 | None | None | Never
Red Hat Product Errata RHEA-2016:0375 | SHIPPED_LIVE | ovirt-hosted-engine-setup bug fix and enhancement update | 2016-03-09 23:48:34 UTC

Internal Links: 1303931

Description Anand Nande 2015-02-25 07:12:37 UTC
+++ This bug was initially created as a clone of Bug #1134346 +++

Description of problem:
Can't pass a VLAN on a bonded interface as the NIC for the ovirtmgmt bridge

Version-Release number of selected component (if applicable):
- Rpm installation on a CentOS 6.5 x86_64
  oVirt-release35.noarch  001-0.4.rc1     @/ovirt-release35
- Engine installed on a CentOS 6.5 x86_64 minimal VM 

How reproducible:


Steps to Reproduce:
1. Have VLANs on bonded interfaces
2. Follow Getting Started Guide for the installation
3. Try to pass one of the VLANs on the bond for the ovirtmgmt bridge creation

Actual results:
[...]
         --== NETWORK CONFIGURATION ==--
         
          Please indicate a nic to set ovirtmgmt bridge on: (bond0.3070, bond0, bond0.3130, bond0.50, bond0.51, em1, em2) [em1]:  bond0.3130
[ ERROR ] Invalid value


Expected results:
It works.

Additional info:

ifcfg-bond0.3130
=================
DEVICE=bond0.3130
USERCTL=no
ONBOOT=yes
BOOTPROTO=none
VLAN=yes
TYPE=Ethernet
NM_CONTROLLED=no
STP=no
=================
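
For reference, a VLAN device such as bond0.3130 also requires an ifcfg file for the underlying bond itself. The following companion config is an illustrative sketch only; the bonding options shown are assumptions, not taken from this report:

ifcfg-bond0 (illustrative)
=================
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=4 miimon=100"
NM_CONTROLLED=no
=================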

--- Additional comment from Doron Fediuck on 2014-09-01 03:18:25 EDT ---

VLAN over bond is not yet supported.
Going forward we'd like to close this gap.
Patches are more than welcome :)

Comment 1 Doron Fediuck 2015-02-26 08:24:26 UTC
Lior,
does the current vdsm API support such a topology?
Hosted Engine installer is using VDSM to generate the needed network, 
so we need this infrastructure in place before we can add support for it in
the installer.

Comment 2 Lior Vernia 2015-02-26 08:46:04 UTC
VDSM definitely supports it, as it's possible via the engine - Dan, what would be the exact command to run?

Comment 3 Dan Kenigsberg 2015-02-26 10:33:03 UTC
As documented in 
https://gerrit.ovirt.org/gitweb?p=vdsm.git;a=blob;f=vdsm/network/api.py;h=2706cfe2b008edb87ebb1fa5f9918b9cfee2463c;hb=HEAD#l767

setupNetwork({'ovirtmgmt': {'bonding': bondname, 'vlan': vlanid ...}}, {}, {'connectivityCheck': False})

The main issue here is that a bonding device has to be passed as 'bonding'; a single nic is passed as 'nic'. Note that in a single command, you may also create the requested bond:

setupNetwork({'ovirtmgmt': {'bonding': bondname, 'vlan': vlanid ...}}, {'bondname': {'nics': [list-of-nics], 'opts': [list-of-opts]}}, {'connectivityCheck': False})
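
To make the shape of those arguments concrete, here is a minimal sketch in plain Python of the (networks, bondings, options) triple for a VLAN over a bond. The device names (bond0, eth0/eth1) and VLAN id 162 are illustrative; actually applying the configuration would mean passing the triple to VDSM's setupNetworks verb, which is not shown here:

```python
# Build the argument triple for VDSM's setupNetworks verb for a
# VLAN-over-bond management network. All names here are illustrative.

def build_setup_networks_args(bond, nics, vlan_id, network="ovirtmgmt"):
    # A bond is passed under 'bonding' (a plain nic would go under 'nic').
    networks = {
        network: {
            "bonding": bond,
            "vlan": vlan_id,
            "bridged": True,
        }
    }
    # The same call can create the bond itself from its slave nics.
    bondings = {bond: {"nics": list(nics)}}
    options = {"connectivityCheck": False}
    return networks, bondings, options

networks, bondings, options = build_setup_networks_args(
    "bond0", ["eth0", "eth1"], 162)
```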

Comment 4 Anand Nande 2015-02-26 15:02:55 UTC
Hi Dan,

So for a VLAN-tagged bond interface (bond0.1210), can we directly run the following on the host (confirming the syntax here):

# vdsClient setupNetwork({'rhevm': {'bonding': bond0, 'vlan': 1210 ...}}, {'bondname': {'nics': [eth0,eth1]}})

Will this create the respective ifcfg files and set the bond to ONBOOT=yes?

I have a customer who is stuck at this phase, as they chose the hosted-engine method
and this is not working out for them after the 3.5 upgrade.

Let me know your thoughts on this.

Cheers
Anand

Comment 5 Dan Kenigsberg 2015-02-27 12:45:59 UTC
I suggest not using the vdsClient utility. It is bloated and ugly. Importing vdscli from vdsm and calling the API directly is very much preferred (as in my previous example). If you must use the command-line utility, try something like

vdsClient -s 0 setupNetworks networks='{rhevm:{bonding:bond0,bridged:True,vlan:1210}}' bondings='{bond0:{nics=eth0,eth1}}'

For the management network, it keeps ONBOOT=yes even on 3.5.

However, even ONBOOT=no networks should be taken up by vdsm on boot. The fact that this is not always the case is a serious bug (bug 1194068) that must be understood and solved.
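
As an aside, whether a device is brought up at boot is governed by the ONBOOT key in its ifcfg file. A quick illustrative check (this helper is not part of vdsm):

```python
# Illustrative helper (not part of vdsm): report whether an ifcfg-style
# config sets ONBOOT=yes.

def onboot_enabled(ifcfg_text):
    for raw in ifcfg_text.splitlines():
        line = raw.strip()
        if line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        if key.strip() == "ONBOOT":
            return value.strip().strip('"').lower() == "yes"
    return False

sample = "DEVICE=bond0.3130\nONBOOT=yes\nVLAN=yes\n"
```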

Comment 10 Michael Burman 2016-02-03 09:28:36 UTC
I'm sorry, but this is working as expected over a VLAN-tagged bond (see video attachment). The management network was successfully configured over the VLAN-tagged bond.

Tested on -->
- Red Hat Enterprise Linux Server release 7.2 (Maipo)
- vdsm-4.17.19-0.el7ev.noarch
- ovirt-hosted-engine-setup-1.3.2.3-1.el7ev.noarch

          Please confirm installation settings (Yes, No)[Yes]:
[ INFO  ] Stage: Transaction setup
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Configuring libvirt
[ INFO  ] Configuring VDSM
[ INFO  ] Starting vdsmd
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Configuring the management bridge
[ INFO  ] Creating Storage Domain
[ INFO  ] Creating Storage Pool
[ INFO  ] Connecting Storage Pool
[ INFO  ] Verifying sanlock lockspace initialization
[ INFO  ] Creating Image for 'hosted-engine.lockspace' ...
[ INFO  ] Image for 'hosted-engine.lockspace' created successfully
[ INFO  ] Creating Image for 'hosted-engine.metadata' ...
[ INFO  ] Image for 'hosted-engine.metadata' created successfully
[ INFO  ] Creating VM Image
[ INFO  ] Destroying Storage Pool
[ INFO  ] Start monitoring domain
[ INFO  ] Configuring VM
[ INFO  ] Updating hosted-engine configuration
[ INFO  ] Stage: Transaction commit
[ INFO  ] Stage: Closing up
[ INFO  ] Creating VM
          You can now connect to the VM with the following command:
                /bin/remote-viewer vnc://localhost:5900
          Use temporary password "6908HHTq" to connect to vnc console.
          Please note that in order to use remote-viewer you need to be able to run graphical applications.
          This means that if you are using ssh you have to supply the -Y flag (enables trusted X11 forwarding).
          Otherwise you can run the command from a terminal in your preferred desktop environment.
          If you cannot run graphical applications you can connect to the graphic console from another host or connect to the serial console using the following command:
          socat UNIX-CONNECT:/var/run/ovirt-vmconsole-console/d0d6af2e-d683-41b8-870b-a4ff6ee736db.sock,user=ovirt-vmconsole STDIO,raw,echo=0,escape=1
          Please ensure that your Guest OS is properly configured to support serial console according to your distro documentation.
          Follow http://www.ovirt.org/Serial_Console_Setup#I_need_to_access_the_console_the_old_way for more info.
          If you need to reboot the VM you will need to start it manually using the command:
          hosted-engine --vm-start
          You can then set a temporary password using the command:
          hosted-engine --add-console-password
        
        
          The VM has been started.
          To continue please install OS and shutdown or reboot the VM.
        
          Make a selection from the options below:
          (1) Continue setup - OS installation is complete
          (2) Abort setup
          (3) Power off and restart the VM
          (4) Destroy VM and abort setup
        
          (1, 2, 3, 4)[1]:

- brctl show
bridge name     bridge id               STP enabled     interfaces
;vdsmdummy;             8000.000000000000       no
ovirtmgmt               8000.001a647a9462       no              bond0.162
                                                        vnet0
virbr0          8000.525400976f73       yes             virbr0-nic

- Everything working as expected, thanks...

Comment 11 Michael Burman 2016-02-03 09:30:27 UTC
Created attachment 1120691 [details]
record vlan over bond

Comment 12 Dan Kenigsberg 2016-02-03 10:03:39 UTC
Michael, on top of the video, would you share the answer file for your hosted-engine-setup?

Comment 13 Michael Burman 2016-02-03 13:17:48 UTC
Created attachment 1120780 [details]
answers conf

Comment 14 Simone Tiraboschi 2016-02-18 11:19:28 UTC
It should work with ovirt-hosted-engine-setup 1.3.3.3

Comment 19 Michael Burman 2016-02-23 09:46:32 UTC
Verified on - ovirt-hosted-engine-setup-1.3.3.3-1.el7ev.noarch

Both with rhel 7.2 and rhev-h 7.2 20160222.0.el7ev 
with:
rhevm-appliance-20160212.0-1
engine-3.6.3.1-0.1.el6
vdsm-4.17.21-0.el7ev

Tested successfully over both a VLAN NIC and a VLAN bond

Comment 21 errata-xmlrpc 2016-03-09 19:08:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-0375.html

Comment 22 Simone Tiraboschi 2018-01-12 10:58:39 UTC
*** Bug 1533624 has been marked as a duplicate of this bug. ***

