Created attachment 1522181 [details]
vdsm log files

Description of problem:
I've recently upgraded my hosts to ovirt 4.2, which includes vdsm 4.20.43. I've configured migration_max_bandwidth = 150; however, the bandwidth used seems to be whatever the actual maximum of the link is, and migration_max_bandwidth seems to be ignored.

Version-Release number of selected component (if applicable):
vdsm 4.20.43

How reproducible:
100%

Steps to Reproduce:
1. Upgrade from 4.1.9 to 4.2.7
2. Update vdsm from the 4.2 repo
3. Reconfigure vdsm.conf and set migration_max_bandwidth = 150

Actual results:
migration_max_bandwidth is ignored and the migration speed is the max the link supports

Expected results:
Migration goes no faster than the speed configured in migration_max_bandwidth

Additional info:
Loaded plugins: enabled_repos_upload, fastestmirror, langpacks, package_upload, product-id, search-disabled-repos, subscription-manager, vdsmupgrade
This system is not registered with an entitlement server. You can use subscription-manager to register.
Loading mirror speeds from cached hostfile
 * ovirt-4.2: www.gtlib.gatech.edu
 * ovirt-4.2-epel: mirror.steadfastnet.com
Installed Packages
Name        : vdsm
Arch        : x86_64
Version     : 4.20.43
Release     : 1.el7
Size        : 193 k
Repo        : installed
From repo   : ovirt-4.2
Summary     : Virtual Desktop Server Manager
URL         : http://www.ovirt.org/develop/developer-guide/vdsm/vdsm/
License     : GPLv2+
Description : The VDSM service is required by a Virtualization Manager to manage the
            : Linux hosts. VDSM manages and monitors the host's storage, memory and
            : networks as well as virtual machine creation, other host administration
            : tasks, statistics gathering, and log collection.

It seems odd that I've got 4.20.43 installed, but the bugzilla only goes up to 4.20.31.
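For reference, step 3 above edits vdsm.conf. A minimal sketch of the stanza being set (the section name follows the layout in vdsm.conf.sample and should be verified against the sample shipped with your vdsm version):

```ini
# /etc/vdsm/vdsm.conf (sketch; section name assumed from vdsm.conf.sample)
[vars]
# Maximum migration bandwidth in MiB/s
migration_max_bandwidth = 150
```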
I should also note that this doesn't seem to be a significant bug for us, just an annoyance in an unlikely edge case: migrating a VM with a lot of RAM, or many VMs at once, while also doing heavy disk writes to guests that live on the hosts at either end of the migration. We use Cinder to connect to Ceph over the same IPoIB link that we use as the migration network.
migration_max_bandwidth from vdsm.conf is only a fallback, used when the engine sends no value. Can you please check vdsm.log for the string maxBandwidth, and check the migration policy in the "Edit Cluster" dialog?
That parameter is not used in migration policies since 4.0. You need to check/change the appropriate policy instead
(In reply to Michal Skrivanek from comment #3)
> That parameter is not used in migration policies since 4.0. You need to
> check/change the appropriate policy instead

Michal, do you think this should be mentioned in vdsm.conf.sample?
it could, though that's really internal docs
(In reply to Michal Skrivanek from comment #5)
> it could, though that's really internal docs

I am not aware of other documentation. Is there any other doc for vdsm.conf, or do you consider vdsm.conf.sample the best place for this information?
I do have the maxBandwidth item in the log, which also seems to be ignored:

[root@vm-int3 ~]# grep maxBandwidth vdsm_bug/vdsm.log
2019-01-21 09:30:01,338-0600 INFO (jsonrpc/4) [api.virt] START migrate(params={'incomingLimit': 1, 'src': 'vm-int3', 'dstqemu': '10.250.6.101', 'autoConverge': 'true', 'tunneled': 'false', 'enableGuestEvents': True, 'dst': 'vm-int1:54321', 'convergenceSchedule': {'init': [{'params': ['100'], 'name': 'setDowntime'}], 'stalling': [{'action': {'params': ['150'], 'name': 'setDowntime'}, 'limit': 1}, {'action': {'params': ['200'], 'name': 'setDowntime'}, 'limit': 2}, {'action': {'params': ['300'], 'name': 'setDowntime'}, 'limit': 3}, {'action': {'params': ['400'], 'name': 'setDowntime'}, 'limit': 4}, {'action': {'params': ['500'], 'name': 'setDowntime'}, 'limit': 6}, {'action': {'params': ['5000'], 'name': 'setDowntime'}, 'limit': -1}, {'action': {'params': [], 'name': 'abort'}, 'limit': -1}]}, 'vmId': 'a533f32e-922c-4b3b-b94c-b93a78484cc2', 'abortOnError': 'true', 'outgoingLimit': 1, 'compressed': 'true', 'maxBandwidth': 40000, 'method': 'online'}) from=::ffff:10.128.8.36,51616, flow_id=4e4d798c-0221-468b-adf5-244bce25a502, vmId=a533f32e-922c-4b3b-b94c-b93a78484cc2 (api:46)
[root@vm-int3 ~]#

Note the requested value: 'maxBandwidth': 40000

[root@vm-int3 ~]# grep speed vdsm_bug/vdsm.log | grep -i progress
2019-01-21 09:30:11,810-0600 INFO (migmon/a533f32e) [virt.vm] (vmId='a533f32e-922c-4b3b-b94c-b93a78484cc2') Migration Progress: 10 seconds elapsed, 72% of data processed, total data: 8264MB, processed data: 3523MB, remaining data: 2394MB, transfer speed 371MBps, zero pages: 604049MB, compressed: 0MB, dirty rate: 0, memory iteration: 1 (migration:867)
[root@vm-int3 ~]#
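To compare the two quantities above programmatically, here is a minimal sketch. The regex is an assumption matched against the Migration Progress line format quoted above; it is not part of vdsm, and the sample line is copied verbatim from the log:

```python
import re

# Sample copied verbatim from the vdsm Migration Progress log line above.
line = ("Migration Progress: 10 seconds elapsed, 72% of data processed, "
        "total data: 8264MB, processed data: 3523MB, remaining data: 2394MB, "
        "transfer speed 371MBps, zero pages: 604049MB, compressed: 0MB, "
        "dirty rate: 0, memory iteration: 1")

requested_limit = 150  # MiB/s, the value set in vdsm.conf

# Pull the observed transfer speed out of the progress line.
m = re.search(r"transfer speed (\d+)MBps", line)
observed = int(m.group(1))

# 371 MBps observed vs. a 150 MiB/s configured limit: clearly exceeded.
print(observed, observed > requested_limit)  # 371 True
```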
Created attachment 1522402 [details] Screenshot of cluster configuration
Provided requested information
(In reply to Logan Kuhn from comment #8)

Thanks for the additional information.

> I do have the maxBandwidth item in the log, which it also seems to be ignoring

No, it is not ignored: grep maxBandwidth vdsm.log shows you the requested max bandwidth in MiB/s. Because this is only an internal API, it is documented only in
https://github.com/oVirt/vdsm/blob/cfd81b73b2b166f65d9839813db034f3819ab87d/lib/vdsm/api/vdsm-api.yml#L3398

> Created attachment 1522402 [details]
> Screenshot of cluster configuration

If you want to set the max bandwidth manually, you can select "Custom" as "Migration bandwidth limit (Mbps)" and enter the desired bandwidth. Please note that the unit in ovirt-4.2 is MiB/s, and in ovirt-4.3 it is Mbps; see bug 1643476.

Please note additionally that the bandwidth per migration is only a fraction of this value, usually half, to account for incoming and outgoing migrations.

If you set a custom bandwidth in the migration policy, is it applied as expected for you?
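The arithmetic described above can be sketched as follows. The helper names and the exact split factor are assumptions for illustration (the comment says "usually the half"), not code taken from vdsm or the engine:

```python
# Sketch of the bandwidth arithmetic described above; the function names
# and the 2-way split are illustrative assumptions, not vdsm source code.

def per_migration_limit(cluster_limit, concurrent_directions=2):
    """Each direction (incoming/outgoing) gets a fraction of the
    cluster-wide limit; with the usual split, half each."""
    return cluster_limit / concurrent_directions

def mib_s_to_mbps(mib_s):
    """ovirt-4.2 expresses the limit in MiB/s, ovirt-4.3 in Mbps
    (see bug 1643476): 1 MiB/s = 8 * 1.048576 Mbps."""
    return mib_s * 8 * 1.048576

# A 150 MiB/s cluster limit allows roughly 75 MiB/s per migration,
# and 150 MiB/s corresponds to about 1258.3 Mbps in 4.3 units.
print(per_migration_limit(150))      # 75.0
print(round(mib_s_to_mbps(150), 1))  # 1258.3
```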
Yup, that does exactly what I would expect it to do... Thanks for your help, this can be closed.
(In reply to Logan Kuhn from comment #11)
> Yup, that does exactly what I would expect it to do... Thanks for your help,
> this can be closed.

Thanks for the feedback.
Dominik, I think we had better change the summary of this report?
will be fixed in vdsm > v4.30.8