Bug 656851
| Summary: | KVM: KVM IO Poor performance (COW Sparse disk on NFS) |
|---|---|
| Product: | Red Hat Enterprise Linux 5 |
| Component: | kvm |
| Version: | 5.5.z |
| Status: | CLOSED WONTFIX |
| Severity: | high |
| Priority: | high |
| Reporter: | Oded Ramraz <oramraz> |
| Assignee: | Kevin Wolf <kwolf> |
| QA Contact: | Virtualization Bugs <virt-bugs> |
| CC: | kwolf, mkenneth, tburke, virt-maint |
| Target Milestone: | rc |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Doc Type: | Bug Fix |
| Last Closed: | 2011-01-16 22:40:46 UTC |
| Bug Blocks: | 580949 |
| Attachments: | IOMeter benchmark results (attachment 463378) |
Description
Oded Ramraz
2010-11-24 10:59:13 UTC
I'm using an IDE disk and a Virtio network adapter (installed with the latest 2.2 RHEV tools).

---

Information needed to isolate the problem:

- Try a Linux guest, to isolate WinXP and IDE/virtio driver issues.
- Don't copy files; use a standard benchmark tool or dd instead.
- Does the host test use NFS as well?
- When you use dd, try it in O_DIRECT mode (for the dd command, and for qemu as well; see the dd/qemu sketch at the end of this report).

---

If it's qcow2 only, it's most likely due to the metadata flushes. I'm working on reducing their impact upstream by batching requests, but even there block-queue is a very intrusive patch, and I don't see any chance of backporting it to RHEL 5. There are a few patches that reduce the flushes a bit and should be possible to backport to RHEL 5, but they won't compensate for the whole impact. In any case, performance should be better as soon as you start working on already allocated clusters (or if you preallocate metadata; see the preallocation sketch at the end of this report); only the initial growth is slow. Can you confirm you see this behaviour?

---

I tried running my tests again with RAW (both preallocated and sparse), and I didn't see the problem. It might happen only with qcow2.

---

Oded, do you have the data from comment #2?

---

(In reply to comment #2)
> Information needed to isolate the problem:
>
> - Try a Linux guest, to isolate WinXP and IDE/virtio driver issues.
> - Don't copy files; use a standard benchmark tool or dd instead.
> - Does the host test use NFS as well?
> - When you use dd, try it in O_DIRECT mode (for the dd command, and for
> qemu as well).

I'm running the IOMeter benchmark from a VM snapshot (COW sparse) based on a COW sparse template, using NFS storage. I tried to reproduce this issue with RAW images on NFS (both sparse and preallocated) without success. I also tried to reproduce it on iSCSI (RAW preallocated, COW sparse), again without success. I assume the problem is qcow2 related when using NFS storage. If we need to test other kinds of guests or dd O_DIRECT modes in order to isolate the problem, please ask the KVM QE guys to perform these tests.

---

What do you mean there is no problem? What are the performance numbers?

---

I ran the IOMeter benchmark again with the NFS setup: 32 KB I/O size, 50% read / 50% write, sequential. For RAW sparse and RAW preallocated I got about 4 MB/sec (2 read and 2 write); for COW sparse I got about 2.5 MB/sec (1.25 read and 1.25 write). I'm not sure this issue is relevant only to the qcow2 disk type. (A fio approximation of this workload appears at the end of this report.)

---

Created attachment 463378 [details]: IOMeter benchmark results
The above is pretty slow even for raw. We might want to test it on a faster server.
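For reference, the O_DIRECT suggestion from the triage list above could look like the following. This is a minimal sketch, not from the original report: the guest file path, block counts, and image path are placeholders, and `cache=none` support depends on the host's qemu-kvm build.

```sh
# Inside the guest: sequential write, then read, bypassing the guest page cache
dd if=/dev/zero of=/tmp/ddtest bs=32k count=32768 oflag=direct
dd if=/tmp/ddtest of=/dev/null bs=32k iflag=direct

# On the host: have qemu open the image with O_DIRECT (no host page cache)
# (remaining VM options omitted; disk path is a placeholder)
qemu-kvm -drive file=/mnt/nfs/disk.qcow2,if=ide,cache=none
```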
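The point about preallocating metadata maps to the qcow2 `preallocation` option. A sketch, assuming a qemu-img version that supports `-o preallocation` (the path and size below are placeholders):

```sh
# Allocate all qcow2 L1/L2 metadata up front; guest writes then hit
# already-mapped clusters, so no allocation-time metadata flushes are needed
qemu-img create -f qcow2 -o preallocation=metadata /mnt/nfs/disk.qcow2 20G
```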
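Finally, the IOMeter workload from the benchmark comment could be approximated on a Linux guest or host with fio. A hedged sketch, assuming fio is installed; the test file path on the NFS mount is a placeholder:

```sh
# 32 KB blocks, sequential, 50/50 read/write mix, O_DIRECT, like the IOMeter job
fio --name=iometer-like --filename=/mnt/nfs/testfile --size=1g \
    --bs=32k --rw=rw --rwmixread=50 --direct=1 --ioengine=sync
```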