Bug 1253610
| Summary: | [virtio-win][vioscsi] Throughput with queue=4 is worse than queue=1 | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Yanhui Ma <yama> |
| Component: | virtio-win | Assignee: | Vadim Rozenfeld <vrozenfe> |
| virtio-win sub component: | virtio-win-prewhql | QA Contact: | Yanhui Ma <yama> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | high | | |
| Priority: | medium | CC: | coli, jherrman, juzhang, lijin, michen, phou, vrozenfe, wquan, wyu, xfu, xuwei, yama, yisun, yuhuang, zhenyzha |
| Version: | 7.2 | | |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Windows | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | The virtio-scsi driver previously consumed excessive system resources when used in multi-queue mode. This update optimizes queue locking in multi-queue configurations, which significantly improves the performance of virtio-scsi in multi-queue mode. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-10-30 16:21:49 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 911394, 1288337, 1401400, 1473046, 1558125, 1940642 | | |
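The "queue=4" in the summary refers to the num_queues property of the guest's virtio-scsi controller. As a rough, illustrative sketch only (the exact command lines used in the runs below are not recorded in this report; the image path, IDs, and sizes are placeholders), a guest with a four-queue vioscsi disk might be started like this:

```bash
# Illustrative only: placeholder image path and device IDs, not the command line from this bug.
qemu-kvm \
    -machine q35,accel=kvm -m 4G -smp 4 \
    -device virtio-scsi-pci,id=scsi0,num_queues=4 \
    -drive file=/path/to/guest.img,if=none,id=drive0,format=raw,cache=none,aio=native \
    -device scsi-hd,drive=drive0,bus=scsi0.0
```

Setting num_queues=1 instead gives the single-queue configuration that the multi-queue results below are compared against.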
Comment 3
Vadim Rozenfeld
2016-03-23 00:43:58 UTC
(In reply to Vadim Rozenfeld from comment #3)
> Can we re-test this issue with more recent drivers (build 116,
> http://download.devel.redhat.com/brewroot/packages/virtio-win-prewhql/0.1/116/win/virtio-win-prewhql-0.1.zip)
> on Windows Server 2012R2?
>
> Thanks,
> Vadim.

OK, will update results ASAP.

(In reply to Vadim Rozenfeld from comment #3)

Since there is a bug in build 116 (bz1321774), we were unable to re-test this issue with it.

(In reply to Vadim Rozenfeld from comment #3)

Here are the results with virtio-win-prewhql-0.1-121 on Windows Server 2012R2:
http://kvm-perf.englab.nay.redhat.com/results/request/bug1253610/multiqueue+virtio_win_prewhql121/Win2012r2.x86_64.fio_win.html

Multi-queue performance is still worse than single queue.

(In reply to Yanhui Ma from comment #6)
> Multi-queue performance is still worse than single queue.

Thanks a lot. We are working on improving multi-queue performance.

Vadim.

Tested on an NVMe backend with hv_time enabled, using the following qemu and kernel versions:

qemu-kvm-rhev-2.10.0-21.el7.x86_64
kernel-3.10.0-855.el7.x86_64
virtio-win: 00.75.104.1450

Multi-queue performance is still slower than single queue:
http://kvm-perf.englab.nay.redhat.com/results/request/16_queues_vs_1_queue/NVMe/hv_time/repreat/raw.virtio_scsi.*.Win2016.x86_64.html

Tested on an NVMe backend, using the following qemu and kernel versions:

qemu-kvm-rhev-2.12.0-7.el7.x86_64
kernel-3.10.0-919.el7.x86_64
virtio-win: 62.76.104.155

Compared with the results in comment 8, there are clear improvements for both multi-queue and single-queue configurations, and multi-queue and single-queue performance are now almost the same:
http://kvm-perf.englab.nay.redhat.com/results/request/bug1253610/multiqueue+virtio_win_prewhql155/raw.virtio_scsi.*.Win2012.x86_64.html

Thanks a lot, Yanhui Ma.

Just a quick question: I wonder whether the NVMe device used for testing supports the "mq-deadline" scheduler (something like "cat /sys/block/nvmeXXX/queue/scheduler" should give an idea whether it does). If it does, is there any difference between "none" and "mq-deadline"? We don't need to rerun the entire test, just the 4K 64QD case, to see if it makes a difference.

Best,
Vadim.
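For reference, checking and switching the scheduler on the host NVMe device, and rerunning only a 4K/QD64 case, could look roughly like the following. This is an illustrative sketch: the device name (nvme0n1) and the fio parameters are placeholders, and the actual results in this bug were collected with fio inside the Windows guest rather than with this host-side command.

```bash
# List the schedulers the device supports; the active one is shown in brackets,
# e.g. "[none] mq-deadline".
cat /sys/block/nvme0n1/queue/scheduler

# Switch to mq-deadline (as root) and confirm the change.
echo mq-deadline > /sys/block/nvme0n1/queue/scheduler
cat /sys/block/nvme0n1/queue/scheduler

# Rerun just a 4K random-read test at queue depth 64 (read-only, host side).
fio --name=4k-qd64 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=64 --numjobs=1 --runtime=60 --time_based
```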
(In reply to Vadim Rozenfeld from comment #14)

Hi Vadim,

I just compared the "none" and "mq-deadline" schedulers with 4 queues. There seems to be no difference between them. Here are the results:
http://kvm-perf.englab.nay.redhat.com/results/request/bug1253610/multiqueue+virtio_win_prewhql155/none_vs_mq-deadline/raw.virtio_scsi.*.Win2012.x86_64.html

(In reply to Yanhui Ma from comment #15)

Thanks a lot,
Vadim.

Hi Vadim,

Based on the above results, could we set this bug to VERIFIED?

Thanks,
wenli

(In reply to Quan Wenli from comment #17)

Hi Wenli,

I hope so. Just keep in mind to keep testing single-queue vs. multi-queue performance in future releases to make sure there is no regression.

All the best,
Vadim.

(In reply to Vadim Rozenfeld from comment #18)

Sure, we will keep testing it. But I need to confirm the expected result with you: should multi-queue performance be better than single queue, or is no regression enough?

(In reply to Yanhui Ma from comment #19)

I think both cases, a drop in multi-queue performance relative to single queue and a drop in performance in general, should be treated as a regression.

Best regards,
Vadim.

Based on comments 15-18, setting this bug to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3413