These are different tests than those in the RHEL6 bug 893327. That bug was using iozone, not fio. It would be useful to compare RHEL6/RHEL7 results for iozone.

Regarding the results in comment 2, the qcow2 results are actually very good; where do you see a 10%-25% gap? For SSD, virtio-scsi has worse results than virtio-blk in write tests. For ramdisk, the results are very noisy, and the overall significance is not great except that virtio-scsi consumes more host CPU.
(In reply to Paolo Bonzini from comment #3)
> These are different tests than those in the RHEL6 bug 893327. That bug was
> using iozone, not fio. It would be useful to compare RHEL6/RHEL7 results
> for iozone.

Iozone is the test tool we used before; now we use the fio benchmark to test block performance.

> Regarding the results in comment 2, the qcow2 results are actually very
> good, where do you see a 10%-25% gap?

We can see a 10%-25% gap in the "BW/CPU" column.

> For SSD, virtio-scsi has worse results than virtio-blk in write tests.

There is also a performance gap in the read tests in the "BW/CPU" aspect.
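For reference, a fio job along these lines can drive such a write test from inside the guest. The parameters below (block size, queue depth, runtime, device path) are illustrative assumptions, not the job file actually used for the results in comment 2:

```ini
; Hypothetical fio job file: sequential write test against a guest
; block device. All values here are example choices, not the ones
; used in the reported runs.
[global]
ioengine=libaio
direct=1
time_based=1
runtime=60

[seq-write]
rw=write
bs=64k
iodepth=32
filename=/dev/vdb   ; virtio-blk disk; a virtio-scsi disk would appear as e.g. /dev/sdb
```

Running the same job against the virtio-blk and virtio-scsi devices, and recording host CPU usage alongside MB/sec and IOPS, gives both the raw numbers and the BW/CPU ratio discussed below.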
> Iozone is the test tool we used before; now we use the fio benchmark to test
> block performance.

Can you run fio for RHEL6 and/or iozone for RHEL7? Otherwise it's difficult to compare.

> There is also a performance gap in the read tests in the "BW/CPU" aspect.

BW/CPU is not performance, it is performance divided by host CPU usage. Only MB/sec and IOPS are performance.
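To make the distinction concrete, here is a minimal sketch of the BW/CPU ratio. The numbers are hypothetical, chosen only to show that two configurations with identical raw throughput can still differ in BW/CPU when one consumes more host CPU:

```python
def bw_per_cpu(bandwidth_mb_s: float, host_cpu_pct: float) -> float:
    """Bandwidth divided by host CPU usage: an efficiency ratio,
    not a raw performance number (MB/s per percent of host CPU)."""
    return bandwidth_mb_s / host_cpu_pct

# Hypothetical example: same raw performance (500 MB/s) on both backends...
virtio_blk = bw_per_cpu(bandwidth_mb_s=500.0, host_cpu_pct=40.0)
virtio_scsi = bw_per_cpu(bandwidth_mb_s=500.0, host_cpu_pct=50.0)

# ...yet BW/CPU differs because one backend burns more host CPU.
print(virtio_blk)   # 12.5
print(virtio_scsi)  # 10.0
```

This is why a gap in the BW/CPU column alone points at host CPU overhead rather than at lower guest-visible throughput.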
(In reply to Paolo Bonzini from comment #5)
> > Iozone is the test tool we used before; now we use the fio benchmark to test
> > block performance.
>
> Can you run fio for RHEL6 and/or iozone for RHEL7? Otherwise it's difficult
> to compare.

Sure, I am working on it; once the test results come out, I will update the comment.
(In reply to Paolo Bonzini from comment #8)
> Is this visible for virtio-blk as well? Or is it specific to virtio-scsi?

I forgot to mention in comment #9 that there is an existing Bug 966398 to track the issue.
> Sure. But we could still see a big IOPS boost with the same RHEL7.0 guest on
> a ramdisk storage backend.

Interesting, this means my theory was wrong. :) It could be due to many factors, perhaps RHEL7 host kernel improvements... Anyhow, the RHEL7 vs. RHEL6 performance doesn't seem to be very different for virtio-scsi and virtio-blk. The host CPU usage can be tracked by bug 966398 (thanks for pointing me to it).

The main issue tracked by this bug should be "virtio-scsi has worse results than virtio-blk in write tests". This is where we have less noise in the ramdisk results, and consistently worse performance in the SSD tests (raw and qcow2). Thanks!