Description of problem:

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
I hit enter too early... Anyhow:

Description of problem:
Slow IO performance from physical hosts to RHEV VMs.

Physical host - Windows 2008 R2, 1 Gb Ethernet.
VM - Windows 2008 R2, 1 Gb Ethernet, 4 GB memory, 4 cores (2 sockets / 2 cores per socket). System disk is sparse; the data disk where the copy happens is preallocated. All disks use VirtIO. RHEV-Block64 2.2.45993 and RHEV-Network64 2.2.45993 (tools and agent also installed).
RHEV - 3x 1 Gb network card bond, iscsi01 and iscsi02 networks.

Test scenario 1
The physical server connects to the VM via \\VMname\d$ and drag-and-drops a file to copy it. Windows starts at 60 MB/s and decreases to about 35 MB/s.

Test scenario 2
The physical server connects to another physical server via \\Physical\d$ and drag-and-drops the same file. Throughput is between 100 and 105 MB/s. d$ on this physical server is an iSCSI-initiated path to the same MD3000i where the RHEV VMs reside.

Additional information
Physical (local disk) -> remote physical (MD3000i disk) = 100 MB/s
Physical (local disk) -> MD3000i disk (same physical) = 120 MB/s
Virtual -> physical = 87 MB/s
So the read performance from virtual to physical is much better than the write.

Also tested with additional configs (Windows guest + IDE, Linux guest + IDE/virtio):

Windows IDE drive (no drivers / tools loaded)
P -> V - starts at 60 MB/s, ends at 30 MB/s, and the VM is VERY SLOW
V -> P - 48 MB/s

Fedora 13 IDE (no drivers / tools loaded)
P -> V - 25 MB/s
V -> P - 55 MB/s

Fedora 13 virtio (no drivers / tools loaded)
P -> V - 22 MB/s (rsync)
V -> P - 58 MB/s (rsync)

The Fedora tests were an rsync over CIFS to a Windows share (a rough sketch of that test follows this comment).

Version-Release number of selected component (if applicable):
RHEV 2.2 GA
etherboot-zroms-kvm-5.4.4-13.el5
kvm-debuginfo-83-164.el5_5.12
kvm-83-164.el5_5.12
kvm-qemu-img-83-164.el5_5.12
kmod-kvm-83-164.el5_5.12
kvm-tools-83-164.el5_5.12
kernel 2.6.18-194.3.1.el5

How reproducible:
Always.

Steps to Reproduce:
1. Conduct the tests described above.
2.
3.

Actual results:
IO speeds inside VMs are 4-5 times slower than physical storage IO.

Expected results:
IO speeds close to physical.

Additional info:
<kwolf> dyasny: P -> V means write and V -> P means read?
<kwolf> dyasny: And do you remember what the numbers were for Win/virtio-blk?
<dyasny> kwolf: yes
<dyasny> kwolf: in summary, everything inside a VM, no matter how it is configured or what guest OS, is about 20-30 MB/s, while raw iscsi is 130
<dyasny> kwolf: cause for a BZ?
<kwolf> dyasny: MB/s as in megabytes, not Mbps as in megabits, right?
<dyasny> kwolf: megabytes
<kwolf> dyasny: You get those 130 when you run the test directly on the host?
<dyasny> kwolf: so, in short, inside a VM disk IO is at least 4 times slower
<dyasny> kwolf: yes, exactly. dd into a LUN is about 130, inside a VM it's 20-30
<kwolf> dyasny: Yeah, that sounds wrong.
<dyasny> kwolf: both IDE and virtIO, so I guess it's another layer
<kwolf> dyasny: We did already check the obvious configuration problems last time in IRC, right? Anyway, a BZ wouldn't hurt to make it more persistent.
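For reference, a rough sketch of the Fedora CIFS/rsync test mentioned above, assuming a hypothetical Windows share name, mount point and test file (the customer's actual values are not recorded here):

  # On the Fedora 13 guest: mount the Windows share over CIFS.
  # //winhost/testshare, /mnt/win and the user name are placeholders.
  mount -t cifs //winhost/testshare /mnt/win -o user=administrator

  # P -> V: pull the file from the physical host onto the guest's data disk
  # (a write through the virtio/IDE layer); rsync prints the transfer rate.
  rsync --progress /mnt/win/testfile.bin /data/testfile.bin

  # V -> P: push the same file back to the physical host's share
  # (a read from the guest's data disk).
  rsync --progress /data/testfile.bin /mnt/win/testfile-copy.bin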
Event posted on 07-20-2010 12:40pm BST by lyarwood

Hi all,

Another update from the customer below:

--8<--
Hi,

As a further test I created a new data storage domain (a 500 GB LUN; this achieved 120 MB/s when tested from Windows directly). I created a VM on this data storage.

Results: P -> V max 50 MB/s, V -> V 80-90 MB/s. Still not good performance.

When we did the raw dd test to the LUN, we were getting 128 MB/s. There must be something in the layers after this...
-->8--

Thanks,
Lee

This event sent from IssueTracker by lyarwood
issue 1132963
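For completeness, the raw dd comparison described above could look roughly like the following; the customer's exact dd options are not recorded, the device paths are hypothetical placeholders, and the write test destroys data on the target, so it must only be run against an unused LUN/disk:

  # On the RHEV host: sequential write straight to the LUN, bypassing the page cache.
  # /dev/mapper/testlun is a placeholder for the actual multipath device.
  dd if=/dev/zero of=/dev/mapper/testlun bs=1M count=4096 oflag=direct

  # Inside a Linux guest: the same write against the preallocated virtio data disk,
  # typically /dev/vdb; adjust to match the guest's device naming.
  dd if=/dev/zero of=/dev/vdb bs=1M count=4096 oflag=direct

Comparing the two rates isolates the virtualization layers from the iSCSI path itself.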
Event posted on 07-22-2010 11:45am BST by lyarwood

Hi all,

The customer came back today and raised a management escalation. I couldn't find you on IRC, so I decided to update here. Do we have anything we can pass on to the customer at the moment? This will end up being raised to a Sev1 tomorrow (the customer's live date) if not.

Thanks,
Lee

This event sent from IssueTracker by lyarwood
issue 1132963
Please provide the RHEL5-specific versions of qemu/kernel (or the RHEV version), the virtio driver version, host CPU info, and kvm_stat output.
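If it helps, a rough sketch of how that information could be gathered on the RHEL5 host (package names taken from the list in the description); the virtio driver version on the Windows guest is visible in Device Manager under the storage controller's driver properties:

  # Host package and kernel versions
  rpm -q kvm kmod-kvm kvm-qemu-img kvm-tools
  uname -r

  # Host CPU details
  cat /proc/cpuinfo

  # Live KVM event counters; run while the copy test is in progress
  kvm_stat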
Is qcow2 used? Note that even for preallocated images, it can affect read-ahead and locality performance.
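A quick way to check is qemu-img on the host; the volume path below is a hypothetical placeholder for the VM's actual data volume in the storage domain:

  # Reports the file format (raw vs. qcow2), virtual size and allocated size.
  # Substitute the real path of the VM's data volume.
  qemu-img info /path/to/vm-data-volume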
Please report test results with the updated drivers.
(In reply to comment #36)
> Please report test results with the updated drivers.

All the cases are closed off right now. If the test results QE reported are comparable to real hardware speeds, I suppose we can close this BZ.
QE, please test with the latest virtio-win blk driver (RHEL6.2).
Dan, it seems comment 1 and comment 24 are not the same; I will provide the comment 1 test results.