Bug 1484818 - Bandwidth is ignored for VLAN migration network
Summary: Bandwidth is ignored for VLAN migration network
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Network
Version: 4.1.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ovirt-4.2.1
Target Release: ---
Assignee: Leon Goldberg
QA Contact: Michael Burman
URL:
Whiteboard:
Depends On:
Blocks: 1530063
 
Reported: 2017-08-24 10:59 UTC by Roman Hodain
Modified: 2021-09-09 12:34 UTC
CC: 8 users

Fixed In Version:
Clone Of:
Cloned to: 1530063
Environment:
Last Closed: 2018-02-12 11:49:35 UTC
oVirt Team: Network
Embargoed:
rule-engine: ovirt-4.2+




Links
System ID | Status | Summary | Last Updated
Red Hat Issue Tracker RHV-43453 | None | None | 2021-09-09 12:34:37 UTC
oVirt gerrit 85736 | MERGED | backend: set vlan speeds based on base interface | 2020-07-13 17:56:24 UTC

Description Roman Hodain 2017-08-24 10:59:58 UTC
Description of problem:
When a VM network is also marked as a migration network, the bandwidth is not stored for this network, and the migration policy's AUTO migration bandwidth does not work, as the engine is not able to negotiate the bandwidth of the network.

Version-Release number of selected component (if applicable):
rhevm-4.1.5.2-0.1.el7

How reproducible:
100%

Steps to Reproduce:
1. Set Migration bandwidth limit to AUTO
2. Set a VM network as migration network
3. Start migration.

Actual results:
The maxBandwidth parameter is not sent in the request from the engine, so the vdsm configuration is used instead.

Expected results:
The bandwidth is set based on the interface connected to the Linux bridge.

Additional info:
vdsm is not able to determine the bandwidth of the Linux bridge, so it sets the speed to 1000 by default, but the engine does not even set this in the DB.

engine=# select name, network_name, is_bond, bond_name, bond_type,bond_opts, vlan_id, speed, addr, bridged, base_interface from vds_interface where vds_id='f1e96245-7ca3-4729-8abf-980501380a2b';
   name    | network_name | is_bond | bond_name | bond_type |              bond_opts               | vlan_id | speed |     addr     | bridged | base_interface 
-----------+--------------+---------+-----------+-----------+--------------------------------------+---------+-------+--------------+---------+----------------
 em3       |              | f       | bond0     |           |                                      |         |  1000 |              | f       | 
 em2       |              | f       |           |           |                                      |         |     0 |              | f       | 
 bond0.500 | VM-private   | f       |           |           |                                      |     500 |       | 172.31.16.66 | t       | bond0
 bond0.274 | VM-public    | f       |           |           |                                      |     274 |       |              | t       | bond0
 bond0     |              | t       |           |           | mode=4 miimon=100 xmit_hash_policy=2 |         |       |              | f       | 
 em4       |              | f       | bond0     |           |                                      |         |  1000 |              | f       | 
 idrac     |              | f       |           |           |                                      |         |     0 | 169.254.0.2  | f       | 
 em1       | ovirtmgmt    | f       |           |           |                                      |         |  1000 | 10.37.192.42 | t       | 
(8 rows)

Also, the speed of the bond interface is not stored, even though vdsm reports the correct speed of 2000.
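
For clarity, the expected engine-side behavior (what the fix tracked in gerrit 85736, "backend: set vlan speeds based on base interface", eventually implements) is roughly: when a VLAN device reports no speed of its own, inherit the speed of its base interface before persisting it. A minimal Java sketch of that idea, using illustrative names rather than the actual ovirt-engine types:

import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

class Nic {
    String name;           // e.g. "bond0.500"
    String baseInterface;  // e.g. "bond0"; null for non-VLAN devices
    Integer vlanId;        // null for non-VLAN devices
    Integer speed;         // Mbps as reported by vdsm; null when unreported
}

class VlanSpeedResolver {
    // Copy the base interface's speed onto every VLAN device that has no
    // speed of its own, so vds_interface.speed is populated and the engine
    // can compute the AUTO migration bandwidth.
    static void inheritVlanSpeeds(List<Nic> nics) {
        Map<String, Nic> byName =
            nics.stream().collect(Collectors.toMap(n -> n.name, Function.identity()));
        for (Nic nic : nics) {
            if (nic.vlanId != null && nic.speed == null && nic.baseInterface != null) {
                Nic base = byName.get(nic.baseInterface);
                if (base != null) {
                    nic.speed = base.speed; // e.g. bond0.500 gets bond0's speed
                }
            }
        }
    }
}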

Comment 1 Dan Kenigsberg 2017-08-27 20:09:31 UTC
Bug 1464055 tracks the issue with bond interfaces.

Comment 2 Edward Haas 2017-11-09 09:48:58 UTC
Has this been attempted with a migration network over a bond or over a nic?

Comment 3 Israel Pinto 2017-11-20 13:12:28 UTC
I checked it with:
Software Version:4.2.0-0.0.master.20171119135709.git6d448d3.el7.centos

Steps:
1. Set a migration network and a VM with AUTO bandwidth in the Legacy migration policy.
2. Migrate the VM.
During the migration:
1. Check the bandwidth on the source host with: virsh -r domjobinfo 2
2. Check the migration network status on the engine.

Results:
Engine:
No speed is reported for migration-net:
engine=# select name, network_name, is_bond, bond_name, bond_type,bond_opts, vlan_id, speed, addr, bridged, base_interface from vds_interface where vds_id='88d8e154-8307-4865-8e21-1e3e0a08b20c';
    name    | network_name  | is_bond | bond_name | bond_type | bond_opts | vlan_id | speed |     addr      | bridged | base_interface 
------------+---------------+---------+-----------+-----------+-----------+---------+-------+---------------+---------+----------------
 enp4s0     | ovirtmgmt     | f       |           |           |           |         |  1000 | 10.35.128.15  | t       | 
 enp6s0     |               | f       |           |           |           |         |  1000 |               | f       | 
 ens1f1     |               | f       |           |           |           |         |  1000 |               | f       | 
 ens1f0     |               | f       |           |           |           |         |  1000 |               | f       | 
 enp6s0.162 | migration-net | f       |           |           |           |     162 |       | 10.35.129.146 | t       | enp6s0
(5 rows)

On host:
BW is reported during migration:
Memory bandwidth: 111.401 MiB/s

[root@orchid-vds1 ~]# virsh -r domjobinfo 2
Job type:         Unbounded   
Operation:        Outgoing migration
Time elapsed:     15547        ms
Data processed:   1.488 GiB
Data remaining:   113.613 MiB
Data total:       4.095 GiB
Memory processed: 1.488 GiB
Memory remaining: 113.613 MiB
Memory total:     4.095 GiB
Memory bandwidth: 111.401 MiB/s
Dirty rate:       0            pages/s
Iteration:        1           
Constant pages:   656537      
Normal pages:     387741      
Normal data:      1.479 GiB
Expected downtime: 101          ms
Setup time:       67           ms

Comment 4 Israel Pinto 2017-11-20 13:38:40 UTC
Also checked with the VM option unchecked on migration-net.
Same results.
engine=# select name, network_name, is_bond, bond_name, bond_type,bond_opts, vlan_id, speed, addr, bridged, base_interface from vds_interface where vds_id='88d8e154-8307-4865-8e21-1e3e0a08b20c';
    name    | network_name  | is_bond | bond_name | bond_type | bond_opts | vlan_id | speed |     addr     | bridged | base_interface 
------------+---------------+---------+-----------+-----------+-----------+---------+-------+--------------+---------+----------------
 ens1f0     |               | f       |           |           |           |         |  1000 |              | f       | 
 enp6s0.162 | migration-net | f       |           |           |           |     162 |       |              | f       | enp6s0
 enp4s0     | ovirtmgmt     | f       |           |           |           |         |  1000 | 10.35.128.15 | t       | 
 enp6s0     |               | f       |           |           |           |         |  1000 |              | f       | 
 ens1f1     |               | f       |           |           |           |         |  1000 |              | f       | 
(5 rows)

It looks like it did not work before either.

Comment 5 Roman Hodain 2017-12-02 14:54:17 UTC
(In reply to Edward Haas from comment #2)
> Has this been attempted with a migration network over a bond or over a nic?

It is over bond. The network is a VM network with a VLAN tag.

VM-private (migration network) -> bond0.500 -> bond0 -> LACP (em3,em4)

Comment 6 Michael Burman 2018-01-15 09:31:15 UTC
Verified on - 4.2.1.1-0.1.el7 

Bandwidth is no longer ignored for a VLAN migration network.

-[ RECORD 5 ]--+--------------
name           | enp6s0.162
network_name   | migration-net
is_bond        | f
bond_name      | 
bond_type      | 
bond_opts      | 
vlan_id        | 162
speed          | 1000
addr           | 10.35.1x9.x
bridged        | t
base_interface | enp6s0


-[ RECORD 1 ]--+-------------------------------------
name           | bond0.162
network_name   | migration-net
is_bond        | 
bond_name      | 
bond_type      | 
bond_opts      | 
vlan_id        | 162
speed          | 2000
addr           | 10.35.1x9.1x9
bridged        | t
base_interface | bond0
-[ RECORD 2 ]--+-------------------------------------
name           | bond0
network_name   | 
is_bond        | t
bond_name      | 
bond_type      | 
bond_opts      | mode=4 miimon=100 xmit_hash_policy=2
vlan_id        | 
speed          | 2000
addr           | 
bridged        | f
base_interface |
-[ RECORD 3 ]--+-------------------------------------
name           | enp12s0f0
network_name   | 
is_bond        | f
bond_name      | bond0
bond_type      | 
bond_opts      | 
vlan_id        | 
speed          | 1000
addr           | 
bridged        | f
base_interface | 
-[ RECORD 6 ]--+-------------------------------------
name           | enp12s0f1
network_name   | 
is_bond        | f
bond_name      | bond0
bond_type      | 
bond_opts      | 
vlan_id        | 
speed          | 1000
addr           | 
bridged        | f
base_interface |


The maxBandwidth parameter is now sent.

Examples:
AUTO in legacy - maxBandwidth='500' (1Gb link)
[root@silver-vdsb ~]# virsh -r domjobinfo 4
Memory bandwidth: 112.766 MiB/s

CUSTOM in legacy - maxBandwidth='25' (set custom with 50Mbps limit)
[root@orchid-vds2 ~]# virsh -r domjobinfo 7
Memory bandwidth: 25.021 MiB/s
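
The two examples are consistent with the engine sending half of the relevant bandwidth figure as maxBandwidth (1000 -> 500 for AUTO on the 1 Gb link, 50 -> 25 for the custom limit). The divisor of 2 is an assumption here (plausibly the default number of concurrent outgoing migrations), not something this bug confirms; a hedged Java sketch of the apparent arithmetic:

class MigrationBandwidthSketch {
    // Assumed arithmetic matching the examples above; the divisor of 2
    // (presumably the default concurrent-migration count) is a guess,
    // not confirmed by this bug report.
    static int maxBandwidth(int availableMbps, int concurrentMigrations) {
        return availableMbps / concurrentMigrations;
    }

    public static void main(String[] args) {
        System.out.println(maxBandwidth(1000, 2)); // AUTO on a 1 Gb link -> 500
        System.out.println(maxBandwidth(50, 2));   // CUSTOM 50 Mbps limit -> 25
    }
}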

Comment 7 Sandro Bonazzola 2018-02-12 11:49:35 UTC
This bug is included in the oVirt 4.2.1 release, published on Feb 12th, 2018.

Since the problem described in this bug report should be
resolved in the oVirt 4.2.1 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

