Bug 616383 - Poor IO performance inside the guests (about 20-30% of the LUN speed)
Summary: Poor IO performance inside the guests (about 20-30% of the LUN speed)
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kvm
Version: 5.5
Hardware: All
OS: Linux
Priority: high
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: chellwig@redhat.com
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: Rhel5KvmTier1
 
Reported: 2010-07-20 10:22 UTC by Dan Yasny
Modified: 2018-12-01 18:49 UTC
CC: 20 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-09-15 06:39:23 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
fdisk output from the host node. (12.44 KB, text/plain)
2010-10-26 14:48 UTC, Lee Yarwood

Description Dan Yasny 2010-07-20 10:22:06 UTC
Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.
  
Actual results:


Expected results:


Additional info:

Comment 1 Dan Yasny 2010-07-20 10:27:39 UTC
I hit enter too early...

Anyhow:
Description of problem:
slow IO performance from physical to rhev vm's

Physical Host - Windows 2008 R2. 1GB Ethernet.

VM - Windows 2008 R2. 1GB Ethernet. 4GB memory. 4 cores (2 sockets / 2 per socket). System disk is sparse; the data disk where the copy happens is pre-allocated. All using VirtIO. RHEV-Block64 2.2.45993 & RHEV-Network64 2.2.45993 (tools & agent also installed).
RHEV - 3x 1GB network card bond. iscsi01 and iscsi02 networks.  

Test Scenario 1

Physical server connects to the VM via \\VMname\d$ and drag-and-drops a file to copy it. Windows starts at 60 MB/s and decreases to about 35 MB/s.

Test Scenario 2

Physical server connects to another physical server via \\Physical\d$ and drag-and-drops the same file as above. Gets between 100 and 105 MB/s. d$ on this physical server is an iSCSI-initiated path to the same MD3000i where the RHEV VMs reside.

Additional Information

Physical (local disk) -> remote physical (MD3000i disk) = 100 MB/s
Physical (local disk) -> MD3000i disk (same physical) = 120 MB/s
Virtual -> physical = 87 MB/s

So the read performance from virtual to physical is much better than the write performance.


Also tested with additional configs (Windows guest + IDE, Linux guest + IDE/VirtIO):
Windows IDE drive (no drivers / tools loaded)
P -> V - starts at 60 MB/s, ends at 30 MB/s, and the VM is VERY SLOW
V -> P - 48 MB/sec

Fedora 13 IDE (no drivers / tools loaded)
P -> V - 25MB/s
V -> P - 55MB/s

Fedora 13 Virtio (no drivers / tools loaded)
P -> V - 22MB/s  (rsync)
V -> P - 58MB/s  (rsync)

The Fedora tests were an rsync over CIFS to a Windows share.
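
For reference, a minimal sketch of that kind of run, assuming a hypothetical share //winhost/share, mount point and test file (not the exact commands used):

  # mount the Windows share over CIFS (share name and credentials are examples)
  mount -t cifs //winhost/share /mnt/winshare -o username=testuser
  # copy a large file and note the throughput rsync reports
  rsync --progress /var/tmp/testfile.bin /mnt/winshare/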

Version-Release number of selected component (if applicable):
RHEV 2.2 GA
etherboot-zroms-kvm-5.4.4-13.el5
kvm-debuginfo-83-164.el5_5.12
kvm-83-164.el5_5.12
kvm-qemu-img-83-164.el5_5.12
kmod-kvm-83-164.el5_5.12
kvm-tools-83-164.el5_5.12
kernel 2.6.18-194.3.1.el5

How reproducible:
always

Steps to Reproduce:
1. Conduct the tests described above

Actual results:
4-5 times slower IO speeds inside VMs compared to physical storage IO

Expected results:
IO speeds close to physical

Additional info:    
<kwolf> dyasny: P -> V means write and V -> P means read?
 dyasny: And do you remember what the numbers were for Win/virtio-blk?
<dyasny> kwolf: yes
 kwolf: in summary, everything inside a VM, no matter how it is configured or what guest OS, is about 20-30Mbps, while raw iscsi is 130
 kwolf: cause for a BZ?
<kwolf> dyasny: MB/s as in megabytes, not Mbps as in megabits, right?
<dyasny> kwolf: megabytes
<kwolf> dyasny: You get those 130 when you run the test directly on the host?
<dyasny> kwolf: so, in short inside a VM disk IO is at least 4 times slower
 kwolf: yes, exactly. dd into a LUN is about 130, inside a VM it's 20-30
<kwolf> dyasny: Yeah, that sounds wrong.
<dyasny> kwolf: both IDE and virtIO, so I guess it's another layer
<kwolf> dyasny: We did already check the obvious configuration problems last time in IRC, right? Anyway, a BZ wouldn't hurt to make it more persistent.
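
For context, the raw-LUN baseline quoted above is the kind of number a direct dd on the host gives; a rough sketch, with example device paths and sizes only (writing to a LUN destroys its data):

  # sequential write straight to the LUN on the host, bypassing the page cache
  dd if=/dev/zero of=/dev/mapper/example-lun bs=1M count=4096 oflag=direct
  # the same test inside the guest against its virtio (or IDE) data disk
  dd if=/dev/zero of=/dev/vdb bs=1M count=4096 oflag=direct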

Comment 2 Issue Tracker 2010-07-20 11:40:51 UTC
Event posted on 07-20-2010 12:40pm BST by lyarwood

Hi All,

Another update from the customer below :

--8<--

Hi,

As a further test I created a new data storage (500GB LUN - this worked on
Windows directly at 120 MB/s). I created a VM on this data storage.

Results: P -> V max 50 MB/s, V -> V 80-90 MB/s.

Still not good performance. When we did the raw dd test to the LUN, we
were getting 128 MB/s. There must be something in the layers after
this...

-->8--

Thanks 

Lee


This event sent from IssueTracker by lyarwood 
 issue 1132963

Comment 3 Issue Tracker 2010-07-22 10:45:50 UTC
Event posted on 07-22-2010 11:45am BST by lyarwood

Hi all,

So the customer came back today and raised a management escalation. I
couldn't find you on IRC, so I decided to update here.

Have we got anything that we can pass on to the customer at the moment? This will
end up getting raised to a Sev1 tomorrow (the customer's live date) if not.

Thanks,

Lee


This event sent from IssueTracker by lyarwood 
 issue 1132963

Comment 6 Bill Burns 2010-09-03 14:36:50 UTC
Please provide the RHEL 5-specific versions of qemu/kernel or RHEV, the virtio version, host CPU info, and kvm_stat output.
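
A rough sketch of how that information is typically gathered on a RHEL 5 host (the package list is an example; kvm_stat ships with kvm-tools):

  rpm -q kvm kmod-kvm kvm-qemu-img kernel            # qemu/kvm and kernel versions
  grep -E 'model name|flags' /proc/cpuinfo | sort -u # host CPU info
  kvm_stat                                           # KVM event counters, captured while the copy test runs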

Comment 7 Dor Laor 2010-09-05 21:36:22 UTC
Is qcow2 used? Note that even for preallocated images, it can affect the read-ahead and locality performance.
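
For example, the image format can be checked on the host with qemu-img (the path below is a placeholder for the actual guest disk image):

  qemu-img info /path/to/guest-data-disk.img
  # 'file format: qcow2' vs. 'raw' is the interesting line; even a preallocated
  # qcow2 image still goes through the qcow2 layer, which can hurt read-ahead
  # and locality as noted above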

Comment 36 chellwig@redhat.com 2011-07-28 11:14:52 UTC
Please report test results with the updated drivers.

Comment 37 Dan Yasny 2011-07-28 13:12:09 UTC
(In reply to comment #36)
> Please report test results with the updated drivers.

All the cases are closed off right now. If the test results QE reported are comparable to real hardware speeds, I suppose we can close this BZ.

Comment 38 Ronen Hod 2011-09-12 17:42:26 UTC
QE, please test with the latest virtio-win blk driver (RHEL6.2).

Comment 39 Suqin Huang 2011-09-14 11:48:32 UTC
Dan,

It seems comment 1 and comment 24 are not the same; I will provide the test result for comment 1.

