+++ This bug was initially created as a clone of Bug #1484818 +++

Description of problem:
When a VM network is also marked as a migration network, the bandwidth is not stored for this network, and the migration policy "Migration bandwidth: AUTO" does not work, as it is unable to negotiate the bandwidth of the network.

Version-Release number of selected component (if applicable):
rhevm-4.1.5.2-0.1.el7

How reproducible:
100%

Steps to Reproduce:
1. Set Migration bandwidth limit to AUTO
2. Set a VM network as migration network
3. Start migration.

Actual results:
The maxBandwidth parameter is not sent in the request from the engine, so the vdsm configuration is used instead.

Expected results:
The bandwidth is set based on the interface connected to the linux bridge.

Additional info:
vdsm is not able to determine the bandwidth of the linux bridge, so by default it sets the speed to 1000, but the engine does not even store this in the DB.

engine=# select name, network_name, is_bond, bond_name, bond_type, bond_opts, vlan_id, speed, addr, bridged, base_interface from vds_interface where vds_id='f1e96245-7ca3-4729-8abf-980501380a2b';
   name    | network_name | is_bond | bond_name | bond_type |              bond_opts               | vlan_id | speed |     addr     | bridged | base_interface
-----------+--------------+---------+-----------+-----------+--------------------------------------+---------+-------+--------------+---------+----------------
 em3       |              | f       | bond0     |           |                                      |         |  1000 |              | f       |
 em2       |              | f       |           |           |                                      |         |     0 |              | f       |
 bond0.500 | VM-private   | f       |           |           |                                      |     500 |       | 172.31.16.66 | t       | bond0
 bond0.274 | VM-public    | f       |           |           |                                      |     274 |       |              | t       | bond0
 bond0     |              | t       |           |           | mode=4 miimon=100 xmit_hash_policy=2 |         |       |              | f       |
 em4       |              | f       | bond0     |           |                                      |         |  1000 |              | f       |
 idrac     |              | f       |           |           |                                      |         |     0 | 169.254.0.2  | f       |
 em1       | ovirtmgmt    | f       |           |           |                                      |         |  1000 | 10.37.192.42 | t       |
(8 rows)

Also, the speed of the bond interface is not stored, even though vdsm reports the correct speed of 2000.
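As background for the "sets the speed to 1000" note above: physical NICs expose their negotiated link speed through sysfs, while software devices such as linux bridges do not, which is why a fallback value is needed at all. A minimal illustrative sketch of that detection logic (hypothetical helper names, not the actual vdsm code):

```python
import os

DEFAULT_BRIDGE_SPEED = 1000  # Mbps; the fallback vdsm reportedly uses


def link_speed(nic, sysfs="/sys/class/net"):
    """Best-effort link speed in Mbps for a network device.

    Physical NICs expose their negotiated speed in
    /sys/class/net/<nic>/speed; bridges and other software devices
    either lack the attribute or return a non-positive value, so we
    fall back to the default. Illustrative sketch only.
    """
    try:
        with open(os.path.join(sysfs, nic, "speed")) as f:
            speed = int(f.read().strip())
        return speed if speed > 0 else DEFAULT_BRIDGE_SPEED
    except (OSError, ValueError):
        return DEFAULT_BRIDGE_SPEED
```

The sysfs path layout here mirrors the standard Linux `/sys/class/net/<device>/speed` attribute; the default constant is taken from the behaviour described in this report.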
--- Additional comment from Dan Kenigsberg on 2017-08-27 16:09:31 EDT ---

Bug 1464055 tracks the issue with bond interfaces.

--- Additional comment from Edward Haas on 2017-11-09 04:48:58 EST ---

Has this been attempted with a migration network over a bond or over a nic?

--- Additional comment from Israel Pinto on 2017-11-20 08:12:28 EST ---

I checked it with:
Software Version: 4.2.0-0.0.master.20171119135709.git6d448d3.el7.centos

Steps:
1. Set a migration network and a migration VM with AUTO BW in policy Legacy.
2. Migrate the VM.

During the migration:
1. Check BW on the source host with: virsh -r domjobinfo 2
2. Check the migration network status on the Engine.

Results:
Engine: No report on migration-net speed

engine=# select name, network_name, is_bond, bond_name, bond_type, bond_opts, vlan_id, speed, addr, bridged, base_interface from vds_interface where vds_id='88d8e154-8307-4865-8e21-1e3e0a08b20c';
    name    | network_name  | is_bond | bond_name | bond_type | bond_opts | vlan_id | speed |     addr      | bridged | base_interface
------------+---------------+---------+-----------+-----------+-----------+---------+-------+---------------+---------+----------------
 enp4s0     | ovirtmgmt     | f       |           |           |           |         |  1000 | 10.35.128.15  | t       |
 enp6s0     |               | f       |           |           |           |         |  1000 |               | f       |
 ens1f1     |               | f       |           |           |           |         |  1000 |               | f       |
 ens1f0     |               | f       |           |           |           |         |  1000 |               | f       |
 enp6s0.162 | migration-net | f       |           |           |           |     162 |       | 10.35.129.146 | t       | enp6s0
(5 rows)

On host: BW is reported during migration (Memory bandwidth: 111.401 MiB/s):

[root@orchid-vds1 ~]# virsh -r domjobinfo 2
Job type:          Unbounded
Operation:         Outgoing migration
Time elapsed:      15547 ms
Data processed:    1.488 GiB
Data remaining:    113.613 MiB
Data total:        4.095 GiB
Memory processed:  1.488 GiB
Memory remaining:  113.613 MiB
Memory total:      4.095 GiB
Memory bandwidth:  111.401 MiB/s
Dirty rate:        0 pages/s
Iteration:         1
Constant pages:    656537
Normal pages:      387741
Normal data:       1.479 GiB
Expected downtime: 101 ms
Setup time:        67 ms

--- Additional comment from Israel Pinto on 2017-11-20 08:38:40 EST ---

Also checked with the VM option unchecked on migration-net. Same results.

engine=# select name, network_name, is_bond, bond_name, bond_type, bond_opts, vlan_id, speed, addr, bridged, base_interface from vds_interface where vds_id='88d8e154-8307-4865-8e21-1e3e0a08b20c';
    name    | network_name  | is_bond | bond_name | bond_type | bond_opts | vlan_id | speed |     addr     | bridged | base_interface
------------+---------------+---------+-----------+-----------+-----------+---------+-------+--------------+---------+----------------
 ens1f0     |               | f       |           |           |           |         |  1000 |              | f       |
 enp6s0.162 | migration-net | f       |           |           |           |     162 |       |              | f       | enp6s0
 enp4s0     | ovirtmgmt     | f       |           |           |           |         |  1000 | 10.35.128.15 | t       |
 enp6s0     |               | f       |           |           |           |         |  1000 |              | f       |
 ens1f1     |               | f       |           |           |           |         |  1000 |              | f       |
(5 rows)

It looks like it never worked before.

--- Additional comment from Roman Hodain on 2017-12-02 09:54:17 EST ---

(In reply to Edward Haas from comment #2)
> Has this been attempted with a migration network over a bond or over a nic?

It is over a bond. The network is a VM network with a VLAN tag.

VM-private (migration network) -> bond0.500 -> bond0 -> LACP (em3,em4)
Verified on - 4.2.1.1-0.1.el7
Bandwidth is no longer ignored for vlan migration network.

-[ RECORD 5 ]--+--------------
name           | enp6s0.162
network_name   | migration-net
is_bond        | f
bond_name      |
bond_type      |
bond_opts      |
vlan_id        | 162
speed          | 1000
addr           | 10.35.1x9.x
bridged        | t
base_interface | enp6s0

-[ RECORD 1 ]--+-------------------------------------
name           | bond0.162
network_name   | migration-net
is_bond        |
bond_name      |
bond_type      |
bond_opts      |
vlan_id        | 162
speed          | 2000
addr           | 10.35.1x9.1x9
bridged        | t
base_interface | bond0

-[ RECORD 2 ]--+-------------------------------------
name           | bond0
network_name   |
is_bond        | t
bond_name      |
bond_type      |
bond_opts      | mode=4 miimon=100 xmit_hash_policy=2
vlan_id        |
speed          | 2000
addr           |
bridged        | f
base_interface |

-[ RECORD 3 ]--+-------------------------------------
name           | enp12s0f0
network_name   |
is_bond        | f
bond_name      | bond0
bond_type      |
bond_opts      |
vlan_id        |
speed          | 1000
addr           |
bridged        | f
base_interface |

-[ RECORD 6 ]--+-------------------------------------
name           | enp12s0f1
network_name   |
is_bond        | f
bond_name      | bond0
bond_type      |
bond_opts      |
vlan_id        |
speed          | 1000
addr           |
bridged        | f
base_interface |

The maxBandwidth parameter is sent. Examples:

AUTO in legacy - maxBandwidth='500' (1Gb link)
[root@silver-vdsb ~]# virsh -r domjobinfo 4
Memory bandwidth: 112.766 MiB/s

CUSTOM in legacy - maxBandwidth='25' (set custom with 50Mbps limit)
[root@orchid-vds2 ~]# virsh -r domjobinfo 7
Memory bandwidth: 25.021 MiB/s
AUTO in legacy - maxBandwidth='1000' (bond 2Gb)
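The AUTO values above are consistent with the engine taking the stored link speed and halving it (1Gb link -> maxBandwidth='500', 2Gb bond -> maxBandwidth='1000'). A hypothetical sketch of that relationship, where the divisor of 2 is inferred purely from these two examples and is not confirmed engine code:

```python
def auto_max_bandwidth(link_speed_mbps, divisor=2):
    """Per-migration bandwidth cap under the AUTO policy.

    Inferred from the verification examples in this bug:
    a 1000 Mbps link yields maxBandwidth=500 and a 2000 Mbps
    bond yields maxBandwidth=1000, i.e. speed divided by 2.
    The divisor is an assumption, not documented behaviour.
    """
    return link_speed_mbps // divisor
```

With the link speeds stored in vds_interface (1000 for a plain NIC, 2000 for the bond), this reproduces both observed maxBandwidth values.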
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2018:1488