Bug 1676708
Summary: [UI] hint after updating MTU on networks connected to running VMs and indicate vNICs out of sync
Product: [oVirt] ovirt-engine
Component: General
Version: 4.3.0
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: medium
Reporter: Sergey <serg>
Assignee: eraviv
QA Contact: Michael Burman <mburman>
CC: bugs, dholler, eraviv, gveitmic, mperina, serg
Flags: pm-rhel: ovirt-4.4+
Target Milestone: ovirt-4.4.6
Target Release: 4.4.6.4
Fixed In Version: ovirt-engine-4.4.6.4
Hardware: Unspecified
OS: Linux
oVirt Team: Network
Type: Bug
Doc Type: If docs needed, set a value
Story Points: ---
Clones: 1766414 (view as bug list)
Bug Blocks: 1113630, 1766414, 1848986
Last Closed: 2021-05-05 05:35:54 UTC
Description
Sergey
2019-02-12 22:34:59 UTC

Sergey, would you please share the vdsm.log of the source and destination hosts, and most importantly the engine.log covering the migration?

Michael Burman (comment #2):

QE can't reproduce on 4.3.0.4-0.1.el7.

Please note that it is not supported to update a network's MTU while it is used by a VM; the change will fail on the vdsm side:
"VDSM host_mixed_3 command HostSetupNetworksVDS failed: Bridge mtu has interfaces set([u'vnet0']) connected"

You first need to unplug the vNIC from the VM, update the network's MTU, wait until the change is applied successfully on the host (UI notification), then plug the vNIC back. The MTU is then updated successfully and preserved after migration.

Sergey:

Created attachment 1534745 [details]
engine and vdsm logs

Attached are the engine and vdsm logs from the source and destination hosts. Don't pay attention to the errors about failed network creation; I created that network on the wrong interface in our test environment.
Migrating VM name: empty-no-os
Net name: test-vlan-noconn
Net VDSM Name: on68b632b6f2134
Before migration:
34: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master on68b632b6f2134 state UNKNOWN group default qlen 1000
After:
36: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master on68b632b6f2134 state UNKNOWN group default qlen 1000
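The before/after output above shows the tap device's MTU silently dropping from 9000 back to 1500 after migration, while its master bridge keeps the network's configured value. As a minimal sketch (a hypothetical helper, not part of oVirt or vdsm), such an out-of-sync vNIC could be detected by parsing `ip -o link`-style output and comparing each device's MTU against the MTU configured on the network:

```python
import re

# Matches lines in the style of `ip -o link show` (and the output quoted
# above): index, device name, flags, MTU, and an optional master bridge.
LINK_RE = re.compile(
    r"^\d+:\s+(?P<dev>\S+?):\s+<[^>]*>\s+mtu\s+(?P<mtu>\d+)"
    r"(?:.*?\bmaster\s+(?P<master>\S+))?"
)

def parse_links(lines):
    """Return {device: (mtu, master_or_None)} for each parseable line."""
    links = {}
    for line in lines:
        m = LINK_RE.match(line.strip())
        if m:
            links[m.group("dev")] = (int(m.group("mtu")), m.group("master"))
    return links

def out_of_sync(links, expected_mtu):
    """Devices whose actual MTU differs from the network's configured MTU."""
    return [dev for dev, (mtu, _) in links.items() if mtu != expected_mtu]
```

For the "After" line quoted above, `out_of_sync(parse_links([...]), 9000)` reports `['vnet1']`, which is exactly the condition the summary asks the UI to surface.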
Sergey (comment #4), in reply to Michael Burman (comment #2):

> QE can't reproduce on 4.3.0.4-0.1.el7
>
> Please note that it is not supported to update network's MTU while it is used
> by a VM, the change will fail on vdsm side:
> "VDSM host_mixed_3 command HostSetupNetworksVDS failed: Bridge mtu has
> interfaces set([u'vnet0']) connected"

But in fact I tested on two installations, and both gave no errors while updating the MTU and actually changed the MTU on the host side. On both I used the "Linux Bridge" switch type and a VLAN network, so maybe the network type (connected or VLAN) is critical here.

Dominik Holler (comment #5), in reply to Sergey (comment #4):

The behavior of the host should not depend on the network type. It looks like the updated MTU was never propagated to libvirt and the guest OS.

The expected behavior is documented in https://ovirt.org/develop/release-management/features/network/managed_mtu_for_vm_networks.html#update-mtu-flow

Do you have a suggestion what would help you to know that the unplug/plug step is required?

Sergey, in reply to Dominik Holler (comment #5):

Thanks for the link; now I can see that it should not work. But the MTU on the VM device changed to 9000 without any action on the VM, and pings with large packets started to flow (after also changing the MTU inside the guest), which made me believe that migration should also work without problems. It was the only thing missing to make the MTU update fully functional from my point of view. :)

Maybe add a warning message when saving a network with a changed MTU, stating that a NIC unplug/plug or VM shutdown/power-on is required to apply it; it could also include a list of affected VMs. Or use a "next run configuration", although next run has a drawback: it won't clear after unplugging/plugging the NIC.

Michael Burman:

Verified on rhvm-4.4.6.5-447.gd80dda7.9.el8ev.noarch

This bugzilla is included in the oVirt 4.4.6 release, published on May 4th 2021. Since the problem described in this bug report should be resolved in oVirt 4.4.6, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.
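Sergey's suggestion of a save-time warning that lists the affected VMs can be sketched as follows. This is an illustrative helper only (hypothetical names and data shape, in Python rather than ovirt-engine's Java), not the actual fix shipped in 4.4.6:

```python
def mtu_change_warning(network, new_mtu, running_vms):
    """Build a save-time warning for an MTU change on a VM network.

    running_vms: mapping of VM name -> list of network names its plugged
    vNICs are attached to (hypothetical shape, for illustration only).
    Returns None when no running VM uses the network.
    """
    affected = sorted(vm for vm, nets in running_vms.items() if network in nets)
    if not affected:
        return None
    return (
        f"Changing the MTU of network '{network}' to {new_mtu} will not take "
        f"effect on plugged vNICs. Unplug/plug the vNIC or restart the VM. "
        f"Affected running VMs: {', '.join(affected)}"
    )
```

With the VM and network names from this report, changing `test-vlan-noconn` would name `empty-no-os` as affected, while a network used by no running VM would produce no warning at all.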