Bug 1533624 - [RFE] Let the user create a storage specific logical network (handling also vlan tag id)
Summary: [RFE] Let the user create a storage specific logical network (handling also vlan tag id)
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: ovirt-hosted-engine-setup
Classification: oVirt
Component: RFEs
Version: 2.2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium (1 vote)
Target Milestone: ---
Target Release: ---
Assignee: Simone Tiraboschi
QA Contact: Nikolai Sednev
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-01-11 19:16 UTC by Vinícius Ferrão
Modified: 2022-03-11 14:20 UTC (History)
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-06-18 07:32:11 UTC
oVirt Team: Integration
Embargoed:
sbonazzo: ovirt-4.4-


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1530839 0 high CLOSED Deployment fails configuring ovirtmgmt interface if a VLAN exists on the management interface 2021-02-22 00:41:40 UTC
Red Hat Issue Tracker RHV-45147 0 None Closed RHEL EUS Errata Documentation 2022-06-16 06:26:55 UTC

Description Vinícius Ferrão 2018-01-11 19:16:51 UTC
Description of problem:
oVirt Hosted Engine can't be deployed when the storage network is a VLAN on top of a bond that will become the ovirtmgmt interface. I'm not sure whether the bond itself has any influence on the problem.

Version-Release number of selected component (if applicable):
4.2.0

How reproducible:
100%

Steps to Reproduce:
1. Configure the network interfaces with NetworkManager: management on the bond and the storage network on a VLAN on top of it.
2. Launch ovirt-hosted-engine-setup.
3. During the configuration of the ovirtmgmt interface, the VLAN interface has its IP address removed, making the storage network unreachable.

Actual results:
Failed to deploy HE on this topology

Expected results:
Successfully deployed HE with separated storage and management traffic.

Additional info:
Here's an ASCII drawing of the network topology:

+----------------------+
| HE Storage (VLAN 10) |
+----------------------+
|   LACP Bond (bond0)  |
+----------++----------+
|    eth0  ||  eth1    |
+----------++----------+

bond0 will become the ovirtmgmt interface
bond0.10@bond0 is the storage network needed for hosted engine
eth0 and eth1 are obviously the physical NICs
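
For reference, a minimal sketch of how this topology could be created with NetworkManager's nmcli before running the setup; the connection names, the bonding options and the address (matching the network.json further below) are only illustrative assumptions:

  # LACP bond over the two physical NICs
  nmcli con add type bond con-name bond0 ifname bond0 bond.options "mode=802.3ad,miimon=100"
  nmcli con add type ethernet con-name bond0-eth0 ifname eth0 master bond0
  nmcli con add type ethernet con-name bond0-eth1 ifname eth1 master bond0
  # VLAN 10 on top of the bond, carrying the storage address
  nmcli con add type vlan con-name bond0.10 dev bond0 id 10 ipv4.method manual ipv4.addresses 192.168.10.3/28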

If VDSM is informed of the storage network running on top of bond0, the installation completes successfully; this was suggested by Edward Haas on a bugfix ticket:

What you can try and do is something like this:
- Start VDSM (the vdsmd service).
- Set the storage network using vdsm-client:
  vdsm-client -f network.json Host setupNetworks
- Start the setup as done so far.

cat network.json:
{
  "networks": {
    "storage": {
      "bonding": "bond0",
      "bridged": false,
      "vlan": 10,
      "ipaddr": "192.168.10.3",
      "netmask": "255.255.255.240",
      "defaultRoute": false
    }
  },
  "bondings": {},
  "options": {
    "connectivityCheck": false
  }
}

Comment 1 Simone Tiraboschi 2018-01-12 10:58:39 UTC
It should already be there; please see:
https://bugzilla.redhat.com/show_bug.cgi?id=1530839#c26

and:
https://bugzilla.redhat.com/show_bug.cgi?id=1196038

*** This bug has been marked as a duplicate of bug 1196038 ***

Comment 2 Simone Tiraboschi 2018-01-12 13:45:23 UTC
OK, understood, sorry: this is not a duplicate of 1196038, since here the user wants to create the management bridge over bond0 without any VLAN tag, but at the same time wants to use bond0.10 with VLAN tag 10 just to reach the storage over the same bond.

Comment 3 Vinícius Ferrão 2018-01-12 14:17:11 UTC
Thank you for looking into this deeply, Simone. May I add some points that I think are relevant to the issue?

1. Why am I using this topology? To achieve isolation and redundancy for the Hosted Engine storage network, since it's a bonded interface with a tagged VLAN set aside for this NFS traffic. I would prefer using iSCSI instead, but iSCSI multipath isn't supported at this stage of deployment.

2. Edward Haas already posted a workaround and a bug fix for VDSM, and it solved the issue. Since this only needs some refinements in the hosted-engine deployment handlers and perhaps in VDSM, can I ask for this to be evaluated for a point release of oVirt 4.2 instead of 4.3? 4.3 is so far away...

Thanks once again for developing the awesome product that oVirt is and for listening to issues from non-RHV users.

V.

Comment 4 Simone Tiraboschi 2018-01-12 17:43:24 UTC
(In reply to Vinícius Ferrão from comment #3)
> 2. Edward Haas already posted a workaround and a bug fix for VDSM, and it
> solved the issue. Since this only needs some refinements in the hosted-engine
> deployment handlers and perhaps in VDSM, can I ask for this to be evaluated
> for a point release of oVirt 4.2 instead of 4.3? 4.3 is so far away...

Currently hosted-engine-setup takes care of just the management network; handling other logical networks as well is a new feature, and new features in general go into a .y release.

You still have an alternative path:
1. avoid creating the bond at deploy time: select eth0 for the management network and access the storage via eth1.10
2. deploy hosted-engine
3. manually add a non-required logical network from the engine and set the VLAN tag on it at the engine level (a sketch follows this list)
4. add your first additional HE host from the engine and configure the network for that host from the engine: create the bond there and assign both logical networks (management and the VLAN-tagged one for the storage) to that bond
5. migrate the engine VM to host 2, set host 1 in maintenance, and repeat step 4 for the first host
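
As an illustration of step 3, a hedged sketch using the engine REST API (the engine URL, credentials, data center name, network name and the NETWORK_ID/CLUSTER_ID placeholders are assumptions; the same can be done from the webadmin UI):

  # create a logical network with VLAN tag 10 in the data center
  curl -k -u 'admin@internal:password' -H 'Content-Type: application/xml' \
       -d '<network><name>storage</name><data_center><name>Default</name></data_center><vlan id="10"/></network>' \
       'https://engine.example.com/ovirt-engine/api/networks'
  # attach it to the cluster as non-required, using the id returned by the call above
  curl -k -u 'admin@internal:password' -H 'Content-Type: application/xml' \
       -d '<network><id>NETWORK_ID</id><required>false</required></network>' \
       'https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID/networks'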

Comment 5 Sandro Bonazzola 2018-06-18 07:32:11 UTC
We are working on adding multipath support to the initial deployment in bug #1193961.

Comment 6 Vinícius Ferrão 2019-01-08 16:14:11 UTC
Hello, just an update. With the Ansible-based ovirt-hosted-engine-setup the same issue still happens, but it's much easier to handle. Ansible stops when asking for the storage, offering a retry option. When this happens it's time to run the "vdsm-client -f network.json Host setupNetworks" trick, and the engine was then successfully deployed and working as expected.
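
For clarity, a minimal sketch of the sequence on the host while the Ansible-based setup is waiting at the retry prompt (the network.json is the one from the description; the exact prompt wording may differ):

  # in a second shell on the host, before answering the retry prompt
  vdsm-client -f network.json Host setupNetworks
  # then switch back to ovirt-hosted-engine-setup and choose to retry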

