Bug 585247 - CFQ regression in 5.5 with sequential workloads
Summary: CFQ regression in 5.5 with sequential workloads
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kernel
Version: 5.5
Hardware: x86_64
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Jeff Moyer
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-04-23 14:24 UTC by Vivek Goyal
Modified: 2011-02-07 16:38 UTC
CC List: 1 user

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-02-07 16:38:39 UTC
Target Upstream Version:
Embargoed:



Description Vivek Goyal 2010-04-23 14:24:11 UTC
Description of problem:

While running some sequential read workloads with CFQ, I noticed a regression in 5.5
compared to 5.4.

I ran iostest with the 5.4 and 5.5 kernels on machine storageqe-02.rhts.eng.bos.redhat.com.

Host=storageqe-02.rhts.eng.bos.redhat.com Kernel=2.6.18-164.el5         
DIR=/mnt/iostestmnt/fio        DEV=/dev/cciss/c1d0p1         
Workload=bsr      iosched=cfq     Filesz=1G   bs=8K   
=========================================================================
job       Set NR  ReadBW(KB/s)   MaxClat(us)    WriteBW(KB/s)  MaxClat(us)    
---       --- --  ------------   -----------    -------------  -----------    
bsr       1   1   230014         52873          0              0              
bsr       1   2   173762         115622         0              0              
bsr       1   4   175933         323484         0              0              
bsr       1   8   171647         1215375        0              0              
bsr       1   16  168934         2859577        0              0              

Host=storageqe-02.rhts.eng.bos.redhat.com Kernel=2.6.18-191.el5         
DIR=/mnt/iostestmnt/fio        DEV=/dev/cciss/c1d0p1         
Workload=bsr      iosched=cfq     Filesz=1G   bs=8K   
=========================================================================
job       Set NR  ReadBW(KB/s)   MaxClat(us)    WriteBW(KB/s)  MaxClat(us)    
---       --- --  ------------   -----------    -------------  -----------    
bsr       1   1   174919         25672          0              0              
bsr       1   2   45996          126272         0              0              
bsr       1   4   46002          621047         0              0              
bsr       1   8   48712          823121         0              0              
bsr       1   16  46985          2082693        0              0              


Version-Release number of selected component (if applicable):

2.6.18-191.el5

How reproducible:

Always

Steps to Reproduce:
1. Set the I/O scheduler on the test device to cfq.
2. Run iostest (or the equivalent fio invocation sketched below) with 1, 2, 4, 8, and 16 concurrent sequential buffered readers.
3. Compare aggregate read bandwidth between the 5.4 and 5.5 kernels.
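
A minimal reproduction sketch, assuming the cciss device and mount point from the tables above (the sysfs path and the fio parameters are illustrative; iostest itself presumably drives similar fio runs):

echo cfq > /sys/block/cciss!c1d0/queue/scheduler
cat /sys/block/cciss!c1d0/queue/scheduler    # verify that [cfq] is selected

# 4 concurrent buffered sequential readers, matching the Workload=bsr,
# Filesz=1G and bs=8K parameters from the results above
fio --name=bsr --directory=/mnt/iostestmnt/fio --rw=read \
    --bs=8k --size=1g --numjobs=4 --invalidate=1 --group_reporting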
  
Actual results:

With the 5.5 kernel (2.6.18-191.el5), aggregate sequential read bandwidth collapses once more than one reader is running (e.g., 45996 KB/s with 2 readers vs. 173762 KB/s on 5.4).

Expected results:

Sequential read bandwidth comparable to the 5.4 kernel (2.6.18-164.el5).
Additional info:

Comment 1 Jeff Moyer 2010-07-22 21:03:57 UTC
iostest seems to rely on bashisms that don't exist in my RHEL 5 version of bash, so I just ran a fio job file with 1, 2, 4, and 8 sequential readers. I don't see this performance issue on my storage, an IBM 2810XIV. Maybe I got the fio job file syntax wrong:

[global]
size=4096m
directory=/mnt/test/
ioscheduler=deadline   # changed this between runs
invalidate=1
runtime=30
time_based
rw=read

[bsr1]
nrfiles=1

[bsr2]
# stonewall makes each job section wait for the previous one to finish,
# so the 1-, 2-, 4-, and 8-reader cases run back to back, not concurrently
stonewall
nrfiles=2

[bsr4]
stonewall
nrfiles=4

[bsr8]
stonewall
nrfiles=8

I tested with kernel 2.6.18-194.el5.
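
For reference, a run might look like this (a sketch; job.fio is a hypothetical name for the file above, and fio applies the ioscheduler= setting to the device itself, so only that line needs editing between runs):

fio job.fio
# edit job.fio to ioscheduler=cfq, then:
fio job.fio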

Comment 2 Jeff Moyer 2010-08-26 19:20:38 UTC
I've tried to reproduce this on my HP EVA, and I am unable to.  In fact, the 5.5 kernel seems to perform better than the 5.4 kernel.  I'm not sure what to do with this bug at this point.

Comment 3 Jeff Moyer 2011-02-07 16:38:39 UTC
Vivek, I'm closing this bug.  If you can reproduce it, feel free to re-open it.

Thanks!

