Bug 1186914
Summary: | RHEL6 qemu-kvm: backport cache=directsync | |
---|---|---|---
Product: | Red Hat Enterprise Linux 6 | Reporter: | Ademar Reis <areis>
Component: | qemu-kvm | Assignee: | Stefan Hajnoczi <stefanha>
Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs>
Severity: | unspecified | Docs Contact: | Dayle Parker <dayleparker>
Priority: | high | |
Version: | 6.6 | CC: | chayang, jentrena, jherrman, juzhang, kwolf, mkenneth, pbonzini, pdwyer, qzhang, rbalakri, rpacheco, salmy, virt-maint, wquan, xigao
Target Milestone: | rc | Keywords: | FutureFeature, Performance
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | qemu-kvm-0.12.1.2-2.453.el6 | Doc Type: | Release Note
Story Points: | --- | |
Clone Of: | | Environment: |
Last Closed: | 2015-07-22 06:08:50 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1185250, 1198205 | |

Doc Text:

qemu-kvm supports directsync cache mode on virtual disks

With this update, qemu-kvm supports the "cache=directsync" option for virtual disks. When "cache=directsync" is set on a virtual disk (configured in the guest XML or in the virt-manager application), write operations in the virtual machine complete only once the data is safely on the disk. This increases data safety for file operations in the virtual machine, and can also improve performance by allowing I/O from the guest to bypass the host page cache.
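For context, a minimal sketch of how the backported mode is enabled. The image path, memory size, and other options below are illustrative placeholders, not taken from this bug; only cache=directsync itself is the option the backport adds.

```shell
# Illustrative only: start a guest with directsync caching on a virtio disk.
# The image path and the rest of the command line are assumptions, not from this bug.
qemu-kvm \
    -m 2048 \
    -drive file=/var/lib/libvirt/images/guest.img,if=virtio,cache=directsync

# The libvirt equivalent is the cache attribute on the disk driver element
# in the guest XML:
#   <disk type='file' device='disk'>
#     <driver name='qemu' type='raw' cache='directsync'/>
#     ...
#   </disk>
```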
Description
Ademar Reis
2015-01-28 21:01:23 UTC
Fix included in qemu-kvm-0.12.1.2-2.453.el6.

Verified the bug on kernel-2.6.32-545.el6.x86_64 and qemu-kvm-0.12.1.2-2.458.el6.x86_64. Below is the comparison of directsync and writethrough when performing multiple small writes on a file opened with the O_DSYNC flag.

1. fio:

```
# for i in `seq 1 5`; do fio --ioengine=sync --readwrite=write --bs=1k --size=50m --fsync=1 --name=test --filename=/dev/vda | grep iops; done

cache=writethrough
write: io=51200KB, bw=4288.5KB/s, iops=4288, runt= 11939msec
write: io=51200KB, bw=4812.3KB/s, iops=4812, runt= 10640msec
write: io=51200KB, bw=4853.9KB/s, iops=4853, runt= 10550msec
write: io=51200KB, bw=4856.4KB/s, iops=4856, runt= 10543msec
write: io=51200KB, bw=4855.4KB/s, iops=4855, runt= 10545msec

cache=directsync
write: io=51200KB, bw=5734.8KB/s, iops=5734, runt= 8928msec
write: io=51200KB, bw=5472.5KB/s, iops=5472, runt= 9356msec
write: io=51200KB, bw=5449.8KB/s, iops=5449, runt= 9395msec
write: io=51200KB, bw=4796.3KB/s, iops=4796, runt= 10675msec
write: io=51200KB, bw=5573.1KB/s, iops=5573, runt= 9187msec
```

2. Java2Disk:

```
# for i in `seq 1 5`; do java Java2Disk /dev/vda 1000 100 rws; done

cache=writethrough
Sync Test rws - msec to perform 1000= 212 SyncsPerSec= 4716.981
Sync Test rws - msec to perform 1000= 213 SyncsPerSec= 4694.8354
Sync Test rws - msec to perform 1000= 206 SyncsPerSec= 4854.369
Sync Test rws - msec to perform 1000= 204 SyncsPerSec= 4901.961
Sync Test rws - msec to perform 1000= 206 SyncsPerSec= 4854.369

cache=directsync
Sync Test rws - msec to perform 1000= 159 SyncsPerSec= 6289.3086
Sync Test rws - msec to perform 1000= 157 SyncsPerSec= 6369.427
Sync Test rws - msec to perform 1000= 159 SyncsPerSec= 6289.3086
Sync Test rws - msec to perform 1000= 155 SyncsPerSec= 6451.613
Sync Test rws - msec to perform 1000= 158 SyncsPerSec= 6329.114
```

There is a ~14.2% IOPS boost in scenario 1 and a ~32.08% performance improvement in scenario 2 when performing multiple small writes on a file opened with the O_DSYNC flag, so the bug is set to VERIFIED status.
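The workload above hinges on O_DSYNC writes, where every small write must reach stable storage before it returns. Outside the guest, the same pattern can be approximated with plain dd, which is a quick sanity check before setting up fio; the scratch file below is an assumption standing in for the guest's /dev/vda.

```shell
# Minimal stand-in for the fio run above: many small writes, each opened
# with O_DSYNC (dd's oflag=dsync), against a scratch file rather than the
# guest block device used in the actual verification.
scratch=$(mktemp)

# 100 writes of 1 KiB each; with cache=directsync on the virtual disk,
# each such write returns only after the data is on stable storage.
dd if=/dev/zero of="$scratch" bs=1k count=100 oflag=dsync 2>/dev/null

# The file should now hold exactly 100 KiB (102400 bytes).
stat -c %s "$scratch"

rm -f "$scratch"
```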
If there is something wrong, please correct it.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1275.html