Bug 585247 - CFQ regression in 5.5 with sequential workloads
Status: CLOSED INSUFFICIENT_DATA
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kernel
Version: 5.5
Hardware: x86_64 Linux
Priority: low    Severity: medium
Target Milestone: rc
Assigned To: Jeff Moyer
QA Contact: Red Hat Kernel QE team
Reported: 2010-04-23 10:24 EDT by Vivek Goyal
Modified: 2011-02-07 11:38 EST
CC List: 1 user

Doc Type: Bug Fix
Last Closed: 2011-02-07 11:38:39 EST

Attachments: None
Description Vivek Goyal 2010-04-23 10:24:11 EDT
Description of problem:

I was running some sequential workloads with CFQ, and I see a regression in 5.5
compared to 5.4.

I ran iostest with the 5.4 and 5.5 kernels on machine storageqe-02.rhts.eng.bos.redhat.com.

Host=storageqe-02.rhts.eng.bos.redhat.com Kernel=2.6.18-164.el5         
DIR=/mnt/iostestmnt/fio        DEV=/dev/cciss/c1d0p1         
Workload=bsr      iosched=cfq     Filesz=1G   bs=8K   
=========================================================================
job       Set NR  ReadBW(KB/s)   MaxClat(us)    WriteBW(KB/s)  MaxClat(us)    
---       --- --  ------------   -----------    -------------  -----------    
bsr       1   1   230014         52873          0              0              
bsr       1   2   173762         115622         0              0              
bsr       1   4   175933         323484         0              0              
bsr       1   8   171647         1215375        0              0              
bsr       1   16  168934         2859577        0              0              

Host=storageqe-02.rhts.eng.bos.redhat.com Kernel=2.6.18-191.el5         
DIR=/mnt/iostestmnt/fio        DEV=/dev/cciss/c1d0p1         
Workload=bsr      iosched=cfq     Filesz=1G   bs=8K   
=========================================================================
job       Set NR  ReadBW(KB/s)   MaxClat(us)    WriteBW(KB/s)  MaxClat(us)    
---       --- --  ------------   -----------    -------------  -----------    
bsr       1   1   174919         25672          0              0              
bsr       1   2   45996          126272         0              0              
bsr       1   4   46002          621047         0              0              
bsr       1   8   48712          823121         0              0              
bsr       1   16  46985          2082693        0              0              
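For scale, a quick calculation of the per-reader-count throughput drop between the two kernels, using the read bandwidth numbers from the tables above:

```python
# Read bandwidth (KB/s) from the iostest runs above, keyed by the number
# of concurrent sequential readers (NR).
bw_el5_164 = {1: 230014, 2: 173762, 4: 175933, 8: 171647, 16: 168934}  # 2.6.18-164.el5 (5.4)
bw_el5_191 = {1: 174919, 2: 45996, 4: 46002, 8: 48712, 16: 46985}      # 2.6.18-191.el5 (5.5)

for nr in sorted(bw_el5_164):
    drop = 100.0 * (bw_el5_164[nr] - bw_el5_191[nr]) / bw_el5_164[nr]
    print(f"NR={nr:2d}: {bw_el5_164[nr]:6d} -> {bw_el5_191[nr]:6d} KB/s  ({drop:.0f}% drop)")
```

With a single reader the drop is about 24%, but with 2 or more concurrent readers it settles around 72-74%, which suggests the regression shows up once CFQ is switching between multiple sequential streams.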


Version-Release number of selected component (if applicable):

2.6.18-191.el5

How reproducible:

Always

Steps to Reproduce:
1. Run iostest (bsr workload, iosched=cfq) on 2.6.18-164.el5 and 2.6.18-191.el5
2. Compare read bandwidth between the two kernels
Actual results:

With 2 or more concurrent sequential readers, read bandwidth drops from roughly 170 MB/s on 2.6.18-164.el5 to roughly 46 MB/s on 2.6.18-191.el5 (see the tables above).

Expected results:

Read bandwidth on 2.6.18-191.el5 comparable to 2.6.18-164.el5.
Additional info:
Comment 1 Jeff Moyer 2010-07-22 17:03:57 EDT
iostest seems to rely on bashisms that don't exist in my RHEL 5 version of bash, so I just ran a fio job file for 1, 2, 4, and 8 sequential readers.  I don't see this performance issue on my storage, an IBM 2810XIV.  Maybe I got the fio job file syntax wrong:

[global]
size=4096m
directory=/mnt/test/
ioscheduler=deadline   # changed this between runs
invalidate=1
runtime=30
time_based
rw=read

[bsr1]
nrfiles=1

[bsr2]
stonewall
nrfiles=2

[bsr4]
stonewall
nrfiles=4

[bsr8]
stonewall
nrfiles=8

I tested with kernel 2.6.18-194.el5.
Comment 2 Jeff Moyer 2010-08-26 15:20:38 EDT
I've tried to reproduce this on my HP EVA, and I am unable to.  In fact, the 5.5 kernel seems to perform better than the 5.4 kernel.  I'm not sure what to do with this bug at this point.
Comment 3 Jeff Moyer 2011-02-07 11:38:39 EST
Vivek, I'm closing this bug.  If you can reproduce it, feel free to re-open it.

Thanks!
