Description of problem:

libvirt.libvirtError: Cannot set interface MTU on '(null)': No such device

2021-03-18 13:08:04.836 7 ERROR nova.virt.libvirt.driver [req-3f7653ff-623d-43fe-9b6c-a17aebe405aa 1c0c526e8d134914b766a1a6354b56bf 3db3d1b6a1e3469da95693d49f4fd308 - default default] [instance: 5aee63f2-7293-412e-895b-0b4282048c1d] Failed to start libvirt guest: libvirt.libvirtError: Cannot set interface MTU on '(null)': No such device

Version-Release number of selected component (if applicable):
qemu-kvm-5.1.0-14.el8.1.x86_64
libvirt-client-6.6.0-7.3.el8.x86_64

How reproducible:

Steps to Reproduce:
1. Create a VM with the following interface:

    <interface type='vhostuser'>
      <mac address='fa:16:3e:92:6d:79'/>
      <source type='unix' path='/var/lib/vhost_sockets/sock7f9a971a-cf3' mode='server'/>
      <model type='virtio'/>
      <driver rx_queue_size='512' tx_queue_size='512'/>
      <mtu size='8942'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

Actual results:
libvirt.libvirtError: Cannot set interface MTU on '(null)': No such device

Expected results:
VM should start

Additional info:
Moshe, can you please attach debug logs? I vaguely recall fixing something like this, not that long ago.
Now that I think about it more, this resembles bug 1767013, where the problem was also that libvirt was unable to detect the interface name (and thus could not set the MTU). Moshe, can you please test this scratch build: https://mprivozn.fedorapeople.org/ovs/ ? I've backported the patches that fix that problem onto the rhel-av-8.3.1 version of libvirt.
It is not working:

libvirt.libvirtError: Cannot set interface MTU on '': No such device

2021-03-21 09:03:37.306 7 ERROR nova.virt.libvirt.driver [req-2e152093-d6f0-409d-ba42-ee0a5ef73093 1c0c526e8d134914b766a1a6354b56bf 3db3d1b6a1e3469da95693d49f4fd308 - default default] [instance: e943d688-5947-43d3-818b-d0e1507790da] Failed to start libvirt guest: libvirt.libvirtError: Cannot set interface MTU on '': No such device

How does libvirt detect the vhost-user device name? Is it from OVS?
Yes, it asks OVS for the name. Can you please attach debug logs? Here's the function that constructs the ovs-vsctl command and extracts the name: https://gitlab.com/libvirt/libvirt/-/blob/master/src/util/virnetdevopenvswitch.c#L529
Created attachment 1766199 [details]
libvirt debug log
So now I understand why it is not working. We have a proprietary ovs-dpdk version for vDPA, but it doesn't use vhost-server-path. I've asked them to build a new ovs-dpdk version that uses vhost-server-path, to see if that solves the issue. I will update you when I have the results.
Right, this is the command that libvirt executes in order to learn the interface name:

2021-03-25 07:35:25.138+0000: 2377188: debug : virCommandRunAsync:2619 : About to run ovs-vsctl --timeout=5 --no-headings --columns=name find Interface options:vhost-server-path=/var/lib/vhost_sockets/sockb6d8e63d-aa9
2021-03-25 07:35:25.139+0000: 2377188: debug : virCommandRunAsync:2622 : Command result 0, with PID 2377905
2021-03-25 07:35:25.148+0000: 2377188: debug : virCommandRun:2464 : Result exit status 0, stdout: '' stderr: '2021-03-25 07:35:25.139+0000: 2377905: debug : virFileClose:135 : Closed fd 35

But ovs-vsctl returned nothing. Let me know whether the ovs rebuild helped, please.
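(For comparison, when the ovs-vsctl that libvirt runs can actually see the Interface record for that socket, the same query prints the port name on stdout; the name below is made up just for illustration:

  # ovs-vsctl --timeout=5 --no-headings --columns=name find Interface options:vhost-server-path=/var/lib/vhost_sockets/sockb6d8e63d-aa9
  vhub6d8e63d-aa

An empty stdout therefore means the OVS database that libvirt talked to has no record of that vhost-server-path.)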
So our OVS part is running in a container. Is there a way in libvirt to change the parameters of the ovs-vsctl call? (Currently I only see the timeout.)
That would explain why ovs-vsctl does not return anything: it doesn't see the DB, which lives inside the container. And I guess you're exposing the vhost socket from the container (that /var/lib/vhost_sockets/sock* path), right? So what about exposing the DB socket too? On my machine ovs-vsctl tries to connect to /var/run/openvswitch/db.sock. What arguments would you like to pass to ovs-vsctl?
Just for my understanding: with vhostuser, will the MTU just be set on the OVS interface? Or will it also add the host_mtu flag to the qemu args?
(In reply to Moshe Levi from comment #10)
> Just for my understanding the mtu with vhostuser will it just set the mtu in
> the ovs interface?
> Is it will also add the host_mtu flag to qemu args?

Yes, both. But if you're running OVS in a container, why not run your VM there too? Also, has exposing the DB socket helped? And what additional arguments would you like to pass to ovs-vsctl?
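Regarding the host_mtu part, just to illustrate (this is only a sketch based on the interface from the description; the exact device options libvirt generates may differ):

  -device virtio-net-pci,netdev=hostnet0,mac=fa:16:3e:92:6d:79,rx_queue_size=512,tx_queue_size=512,host_mtu=8942,bus=pci.0,addr=0x3

i.e. the <mtu size='8942'/> ends up as the host_mtu property of the virtio-net device, in addition to the MTU being set on the OVS interface.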
It's a complex setup: we have OVS on the host and we use ovs-dpdk in a container just to do vDPA connectivity. It would be nice if we could add a --db flag, e.g.:

  ovs-vsctl --db unix:/forwarder/var/run/openvswitch/db.sock show

so that libvirt can query it like this. Can I configure libvirt to run ovs-vsctl against a different db?
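For example, combining that flag with the query from comment 7 would give something like this (just an illustration; the socket path is the one from the debug log):

  ovs-vsctl --timeout=5 --db=unix:/forwarder/var/run/openvswitch/db.sock --no-headings --columns=name find Interface options:vhost-server-path=/var/lib/vhost_sockets/sockb6d8e63d-aa9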
There's no way to do that without rebuilding your own libvirt. This patch should do the trick:

diff --git i/src/util/virnetdevopenvswitch.c w/src/util/virnetdevopenvswitch.c
index bd840bd3b7..ef87829634 100644
--- i/src/util/virnetdevopenvswitch.c
+++ w/src/util/virnetdevopenvswitch.c
@@ -57,6 +57,7 @@ virNetDevOpenvswitchCreateCmd(void)
 {
     virCommandPtr cmd = virCommandNew(OVS_VSCTL);
     virCommandAddArgFormat(cmd, "--timeout=%u", virNetDevOpenvswitchTimeout);
+    virCommandAddArgPair(cmd, "--db", "unix:/forwarder/var/run/openvswitch/db.sock");
     return cmd;
 }

The problem with allowing users to pass arbitrary arguments to ovs-vsctl is that they may interfere with whatever libvirt sets and render the whole command line unusable.

BTW: what's stopping you from exporting the db.sock path without the "/forwarder" prefix? What if something else tries to use ovs-vsctl from outside the container?
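With that patch applied, every ovs-vsctl command libvirt builds should start roughly like this (a sketch only; virCommandAddArgPair joins the two strings with '='):

  ovs-vsctl --timeout=5 --db=unix:/forwarder/var/run/openvswitch/db.sock ...

Since the path is hard coded, this is only meant for testing whether pointing libvirt at the container's DB works at all.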
We have 2 OVS instances: one on the host (/var/run/openvswitch/db.sock) and one in the container (/forwarder/var/run/openvswitch/db.sock). Do you think adding the db support in libvirt.conf is an option?
It is possible, yeah. However, I don't think that mimicking --timeout would be enough. The problem is that one may have plenty of containers, each with its own OVS; setting one global DB path for all guests won't allow an individual approach. We might need to expose it in the domain XML. But before I dig any deeper - has the patch from comment 13 helped? Or do you want me to provide a scratch build for you?
A scratch build would be nice. We have OpenStack, which is quite complex, so a scratch build will help.
Here you go: https://mprivozn.fedorapeople.org/ovs/ What's contained in the scratch build? Mostly some patches to catch up with upstream, plus the patch from comment 13, which unconditionally passes the --db path - for every command. So this may break other use cases where you want to communicate with an OVS that's outside the container. But since this is only to test whether specifying the db path works, I'd say it is okay.
Thanks, I will try it and update you :)
Moshe, any update?
It appears that the XML was missing the VF netdevice element <target dev='enp3s0f0v6'/>. With it present, libvirt avoids querying OVS and we are able to change the VF MTU. You can close this bug, as it was a misconfiguration on our side.
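For reference, the interface from the description with the missing element added looks roughly like this (the MAC, socket, queue, MTU and PCI values are the ones from the original report; the target dev is the VF netdevice mentioned above):

    <interface type='vhostuser'>
      <mac address='fa:16:3e:92:6d:79'/>
      <source type='unix' path='/var/lib/vhost_sockets/sock7f9a971a-cf3' mode='server'/>
      <model type='virtio'/>
      <driver rx_queue_size='512' tx_queue_size='512'/>
      <mtu size='8942'/>
      <target dev='enp3s0f0v6'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>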
Very well. I'm closing this per comment 20.