Add support for the DISCARD and WRITE ZEROES commands, which were introduced in the virtio-blk protocol to improve performance when using an SSD backend. The Linux guest driver already supports these features.
The series is upstream and these features will be included in QEMU 4.0:
https://www.mail-archive.com/qemu-devel@nongnu.org/msg598631.html

The series adds support for the DISCARD and WRITE_ZEROES commands and extends virtio-blk-test to cover the new commands.
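Enabling the new commands is done per device. A minimal command-line sketch, assuming QEMU >= 4.0 (the image path, ids, and the limit values below are placeholders, not values from this report):

```shell
# Expose DISCARD and WRITE ZEROES to the guest on a virtio-blk device.
# The optional max-discard-sectors / max-write-zeroes-sectors limits are
# expressed in 512-byte sectors.
qemu-system-x86_64 \
  -drive file=disk.raw,format=raw,if=none,id=drive0,discard=unmap \
  -device virtio-blk-pci,drive=drive0,discard=on,write-zeroes=on,max-discard-sectors=4194303,max-write-zeroes-sectors=4194303
```

Setting discard=off,write-zeroes=off on the -device instead disables the features, which is how the negative test below can be reproduced.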
(Reply to the Recording from Stefano in Epic)
(Recording from Stefano in Epic)
(Recording from Cong Li in Epic)

Hi Stefano,

Sorry for the late reply.

> > 1. I saw the updates in tests/virtio-blk-test.c when I checked
> > the commit log, but actually QE does not use it for our testing, and
> > I have no idea what the test scenario is. Is it possible for you to
> > help describe the steps, so QE could create a new test case in Polarion and test it?
>
> Sure!
> What is the QE environment?
> If you want to test the features in a Linux guest, it is not simple
> because a lot of things are hidden from user-space (e.g. what I
> said about 'blkdiscard -z /dev/vda' to test write-zeroes). For this
> reason, the QEMU tests use libqos
> (https://wiki.qemu.org/Features/qtest_driver_framework)

Generally, only a freshly installed guest.
No such qtest driver framework.

> > 3. I still would like to confirm how to test max-discard-sectors
> > and max-write-zeroes-sectors: what is the difference if I set different
> > values, and how do I confirm it in testing?
>
> These values control the maximum size per request that the driver can use,
> so for example, if you set 1, the operation will be very slow, because the
> driver must send multiple requests. Also in this case, they are hidden from
> user-space.

Is there a reference for the performance improvement?

Thanks.
(In reply to CongLi from comment #5)
> Generally, only a freshly installed guest.
> No such qtest driver framework.

Okay, so in this case, the only tests that you can do are the ones that I mentioned:

- discard

  # when discard is disabled
  $ blkdiscard /dev/vda && echo "PASS" || echo "FAIL"
  blkdiscard: /dev/vda: BLKDISCARD ioctl failed: Operation not supported
  FAIL

  # when discard is enabled
  $ blkdiscard /dev/vda && echo "PASS" || echo "FAIL"
  PASS

- write-zeroes

  # fill the disk with random bytes
  $ dd if=/dev/urandom of=/dev/vda bs=64k conv=fsync

  $ dd if=/dev/vda bs=64k | tr -d '\0' | read -n 1 && echo "PASS (not all zeroes)" || echo "FAIL"
  PASS (not all zeroes)

  $ blkdiscard -z /dev/vda

  $ dd if=/dev/vda bs=64k | tr -d '\0' | read -n 1 && echo "FAIL" || echo "PASS (all zeroes)"
  81920+0 records in
  81920+0 records out
  PASS (all zeroes)

Unfortunately, this test also passes when write-zeroes is disabled, because the Linux I/O subsystem emulates the write-zeroes operation with a plain write.
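For reference, the "all zeroes" check done above with the dd/tr pipeline can also be written as a small script; a minimal sketch (the /dev/vda path in the usage comment is an assumption about the guest's disk name):

```python
def is_all_zeroes(path: str, chunk_size: int = 64 * 1024) -> bool:
    """Return True if every byte readable from `path` is zero.

    Works on block devices as well as regular files; reads in chunks so
    the whole device never has to fit in memory.
    """
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            if chunk.count(0) != len(chunk):  # at least one non-zero byte
                return False
    return True

# In the guest (hypothetical device path):
#   is_all_zeroes("/dev/vda")
```

Like the dd/tr pipeline, this only inspects the data the guest reads back, so it cannot distinguish a real WRITE ZEROES command from the kernel's write-based emulation.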
> > > 3. I still would like to confirm how to test max-discard-sectors
> > > and max-write-zeroes-sectors, what's the difference if I set different
> > > values, how to confirm it in the testing?
>
> > These values control the maximum size per request that the driver can use,
> > so for example, if you set 1, the operation will be very slow, because the
> > driver must send multiple requests. Also in this case, they are hidden from
> > user-space.
>
> Is there a reference for the performance improvement?

No, but I did some tests, and I got these values using a raw image (5 GiB):

- max-discard-sectors=1

  $ time blkdiscard /dev/vda
  real    0m 12.59s
  user    0m 0.00s
  sys     0m 4.14s

- max-discard-sectors=4194303

  $ time blkdiscard /dev/vda
  real    0m 0.00s
  user    0m 0.00s
  sys     0m 0.00s

- max-write-zeroes-sectors=1

  $ time blkdiscard -z /dev/vda
  real    1m 1.33s
  user    0m 0.00s
  sys     0m 7.19s

- max-write-zeroes-sectors=4194303

  $ time blkdiscard -z /dev/vda
  real    0m 0.59s
  user    0m 0.00s
  sys     0m 0.00s

I hope this can help.
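The timing gap follows directly from request splitting: the driver has to cover the whole device with requests of at most max-*-sectors sectors each, so the request count is ceil(total_sectors / max_sectors). A quick sketch for the 5 GiB image above, assuming 512-byte sectors:

```python
SECTOR_SIZE = 512  # virtio-blk sector size in bytes

def requests_needed(total_bytes: int, max_sectors: int) -> int:
    """Number of requests the driver must issue to cover total_bytes
    when each request spans at most max_sectors sectors."""
    total_sectors = (total_bytes + SECTOR_SIZE - 1) // SECTOR_SIZE
    return (total_sectors + max_sectors - 1) // max_sectors  # ceil division

disk = 5 * 1024**3  # the 5 GiB raw image from the measurements above

print(requests_needed(disk, 1))        # → 10485760 (one sector per request)
print(requests_needed(disk, 4194303))  # → 3
```

Ten million tiny requests versus three large ones is what turns the 12-second discard into an effectively instant one.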
Verified this bug on:
kernel-4.18.0-96.el8.x86_64
qemu-kvm-4.0.0-2.module+el8.1.0+3258+4c45705b.x86_64

Same steps as BZ1692939#c6.

Thanks.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3723