Multi-queue improves the performance of the block layer in several scenarios. It needs to be explicitly enabled via libvirt.

+++ This bug was initially created as a clone of Bug #1378533 +++
+++ This bug was initially created as a clone of Bug #1378532 +++

Multi-queue support for virtio-blk.
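For reference, a rough sketch of how this could be enabled in the domain XML (assuming libvirt >= 3.9.0 and QEMU >= 2.7, which accept a 'queues' attribute on the disk <driver> element; the disk definition and queue count below are illustrative, not taken from this bug):

  <!-- Illustrative sketch only: 4 queues picked arbitrarily; the guest
       must be restarted for the change to take effect. -->
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' queues='4'/>
    <source file='/path/to/disk.qcow2'/>
    <target dev='vda' bus='virtio'/>
  </disk>

libvirt should pass this value through as the num-queues property of the virtio-blk device in QEMU.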
virtio-blk multiqueue only makes a difference for local SSDs assigned to virtual machines. Does RHV support that (without multiqueue)? If not, there should be a separate bug for local SSD support and it should block this bug.
I don't think RHV does support local SSDs (well only if using a local Storage Domain, and these could be SSDs of course), but then there is no longer any live migration possible. Leaving this on Yaniv to decide.
(In reply to Martin Tessun from comment #2)
> I don't think RHV does support local SSDs (well only if using a local
> Storage Domain, and these could be SSDs of course), but then there is no
> longer any live migration possible.
>
> Leaving this on Yaniv to decide.

Would it make a difference for local storage exposed from Gluster in HCI?
> but then there is no longer any live migration possible.

Yes, of course. For this reason it's more useful for OpenStack.

> Would it make a difference for local storage exposed from Gluster in HCI?

What is HCI? I think Gluster introduces overhead. In the case I'm talking about, the VM is able to push hundreds of kIOPS to the storage.
(In reply to Paolo Bonzini from comment #4)
> > but then there is no longer any live migration possible.
>
> Yes, of course. For this reason it's more useful for OpenStack.
>
> > Would it make a difference for local storage exposed from Gluster in HCI?
>
> What is HCI? I think Gluster introduces overhead. In the case I'm talking
> about, the VM is able to push hundreds of kIOPS to the storage.

HCI = Hyper Converged Infrastructure. As HCI more or less uses Gluster as a network filesystem, I don't think that multi-queue support here will increase the throughput.
Please explain what you mean by local SSD support.
Disks that are physically attached to the server that hosts the virtual machine, with transient data (destroyed when you stop or delete the instance) but higher throughput and lower latency.
Currently, it will cause this error when creating a snapshot on an NFS storage domain (RHV-4.1.8.2-0.1.el7, libvirt-3.9.0-5.el7.x86_64, qemu-kvm-rhev-2.10.0-11.el7.x86_64, vdsm-4.19.41-1.el7ev.x86_64):

2017-12-07 21:45:43,286-0500 INFO (jsonrpc/3) [vdsm.api] FINISH getQemuImageInfo error=cmd=['/usr/bin/qemu-img', 'info', '--output', 'json', '-f', 'qcow2', u'/rhev/data-center/mnt/10.73.194.27:_vol_S3_libvirtmanual_yanqzhan_rhv__nfs__1/6fdb0d21-3910-4721-9b66-f554b7e06269/images/70cc7ab9-aa5f-40c3-861b-580da60614a7/40574899-a51f-4f24-aa7a-e84a21942c7c'], ecode=1, stdout=, stderr=qemu-img: Could not open '/rhev/data-center/mnt/10.73.194.27:_vol_S3_libvirtmanual_yanqzhan_rhv__nfs__1/6fdb0d21-3910-4721-9b66-f554b7e06269/images/70cc7ab9-aa5f-40c3-861b-580da60614a7/40574899-a51f-4f24-aa7a-e84a21942c7c': Failed to get shared "write" lock

Vdsm should check the qemu version and add -U when checking an image that is in use by a VM, for qemu >= 2.10.
(In reply to Han Han from comment #8)
> Currently, it will cause this error when creating a snapshot on an NFS
> storage domain (RHV-4.1.8.2-0.1.el7, libvirt-3.9.0-5.el7.x86_64,
> qemu-kvm-rhev-2.10.0-11.el7.x86_64, vdsm-4.19.41-1.el7ev.x86_64):
>
> 2017-12-07 21:45:43,286-0500 INFO (jsonrpc/3) [vdsm.api] FINISH
> getQemuImageInfo error=cmd=['/usr/bin/qemu-img', 'info', '--output', 'json',
> '-f', 'qcow2',
> u'/rhev/data-center/mnt/10.73.194.27:_vol_S3_libvirtmanual_yanqzhan_rhv__nfs__1/6fdb0d21-3910-4721-9b66-f554b7e06269/images/70cc7ab9-aa5f-40c3-861b-580da60614a7/40574899-a51f-4f24-aa7a-e84a21942c7c'],
> ecode=1, stdout=, stderr=qemu-img: Could not open
> '/rhev/data-center/mnt/10.73.194.27:_vol_S3_libvirtmanual_yanqzhan_rhv__nfs__1/6fdb0d21-3910-4721-9b66-f554b7e06269/images/70cc7ab9-aa5f-40c3-861b-580da60614a7/40574899-a51f-4f24-aa7a-e84a21942c7c':
> Failed to get shared "write" lock
>
> Vdsm should check the qemu version and add -U when checking an image that is
> in use by a VM, for qemu >= 2.10.

BTW, this is related to image locking (see bug 1415252), not multi-queue.
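For illustration, the -U usage suggested in comment #8 would look roughly like the following (the path is a placeholder, not the one from the log; -U / --force-share is available in qemu-img starting with QEMU 2.10):

  # Query an image that is currently opened read-write by a running VM,
  # without trying to take the image lock (qemu-img from QEMU >= 2.10).
  qemu-img info --output json -f qcow2 -U /path/to/active-layer.qcow2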
Our default moved to virtio-scsi. We do not plan to invest in virtio-blk. There is also no policy as to when to use this feature.
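For comparison, multi-queue on the virtio-scsi side is already exposed by libvirt through the 'queues' attribute on the controller's <driver> element; a hedged sketch (the queue count is illustrative):

  <controller type='scsi' index='0' model='virtio-scsi'>
    <!-- Illustrative: 4 request queues for the virtio-scsi HBA -->
    <driver queues='4'/>
  </controller>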