Bug 893327

Summary: virtio_scsi performance 20%+ worse than virtio_blk in some scenarios
Product: Red Hat Enterprise Linux 6
Reporter: Xiaomei Gao <xigao>
Component: qemu-kvm
Assignee: Fam Zheng <famz>
Status: CLOSED DEFERRED
QA Contact: Virtualization Bugs <virt-bugs>
Severity: medium
Docs Contact:
Priority: high
Version: 6.4
CC: akong, areis, bmcclain, bsarathy, chayang, famz, juzhang, kwolf, lyarwood, michen, mkenneth, pablo.iranzo, pbonzini, qzhang, rbalakri, tcarlin, tvvcox, virt-maint, wquan, xigao
Target Milestone: rc
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 999304 (view as bug list)
Environment:
Last Closed: 2014-12-15 09:42:18 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1106420    
Bug Blocks: 999304, 1002699    

Comment 4 Paolo Bonzini 2013-10-24 15:26:26 UTC
The main difference here is that vcpu0 is always at 100% in the virtio-scsi tests.  The load is much more balanced between vcpu0 and vcpu1 for virtio-blk.

This actually applies to both qcow2 and raw, but we only see worse performance from it with qcow2.  I think we should first analyze/fix this fairness issue to see whether it affects performance, because it's "weird".
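
(For anyone re-checking this, the per-vCPU load can be observed with commands
like the ones below; the domain name "rhel6-guest" and the assumption of a
single qemu-kvm process are placeholders, not taken from the test setup.)

# per-thread CPU usage of the qemu-kvm process; the vCPU threads are listed
# individually, so a vcpu0 stuck at 100% stands out (assumes one guest running)
pidstat -t -p $(pgrep -f qemu-kvm) 1

# or via libvirt
virsh vcpuinfo rhel6-guest
virsh cpu-stats rhel6-guest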

Comment 5 Amos Kong 2013-10-29 03:18:29 UTC
Hi xgao,

Can you list the host NUMA node info and the pinning setup?
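
(For reference, that information can usually be collected with commands along
these lines; the domain name "rhel6-guest" is a placeholder.)

numactl --hardware              # host NUMA topology
virsh vcpupin rhel6-guest       # current vCPU pinning (query form, no cpulist)
virsh emulatorpin rhel6-guest   # current emulator-thread pinning
virsh numatune rhel6-guest      # current NUMA memory policy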

Comment 6 Xiaomei Gao 2013-10-30 01:56:18 UTC
(In reply to Amos Kong from comment #5)
> Hi xgao,
> 
> Can you list the host NUMA node info and the pinning setup?

irqbalance is running on both the host and the guest; we didn't set up any pinning on either.

[root@hp-z800-06 ~]# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3
node 0 size: 8175 MB
node 0 free: 7287 MB
node 1 cpus: 4 5 6 7
node 1 size: 8192 MB
node 1 free: 7906 MB
node distances:
node   0   1 
  0:  10  20 
  1:  20  10
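
(No pinning was used in these runs. For comparison, an explicit setup that
keeps a 2-vCPU guest on host node 0 might look like the sketch below; the
domain name "rhel6-guest" is a placeholder, not the actual guest.)

virsh vcpupin rhel6-guest 0 0        # vCPU 0 -> host CPU 0
virsh vcpupin rhel6-guest 1 1        # vCPU 1 -> host CPU 1
virsh emulatorpin rhel6-guest 0-3    # emulator threads on node 0 CPUs
virsh numatune rhel6-guest --mode strict --nodeset 0 --live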

Comment 9 Xiaomei Gao 2014-04-03 02:42:39 UTC
(In reply to Fam Zheng from comment #8)
> Xiaomei,
> 
> This bug has been around for a while and I know we are switching to fio.
> Do the latest performance tests have such a vcpu fairness measurement (e.g.
> on 6.5, 6.6 and new qemu-kvm)? Can you help confirm whether the unfairness
> is still observed?

Okay, we will test the latest 6.6 qemu-kvm and see if the issue still happens. We will update the bug once fresh results are in hand.
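
(Since the tests are moving to fio, a representative invocation for the
virtio-blk vs. virtio-scsi comparison might look like the line below; the
device path and job parameters are illustrative, not the exact QE job.)

# device name depends on the bus: /dev/vdX for virtio-blk, /dev/sdX for virtio-scsi
fio --name=randread --filename=/dev/vdb --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=32 --runtime=60 --time_based \
    --group_reporting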

Comment 10 Ademar Reis 2014-05-28 11:52:15 UTC
(In reply to Xiaomei Gao from comment #9)
> (In reply to Fam Zheng from comment #8)
> > Xiaomei,
> > 
> > This bug has been around for a while and I know we are switching to fio.
> > Do the latest performance tests have such a vcpu fairness measurement (e.g.
> > on 6.5, 6.6 and new qemu-kvm)? Can you help confirm whether the unfairness
> > is still observed?
> 
> Okay, we will test the latest 6.6 qemu-kvm and see if the issue still
> happens. We will update the bug once fresh results are in hand.

Keeping the needinfo until you have the test results. Thanks.