Description of problem:
I was running some sequential workloads with CFQ, and I see a regression in 5.5 as compared to 5.4. I ran iostest with the 5.4 and 5.5 kernels on machine storageqe-02.rhts.eng.bos.redhat.com.

Host=storageqe-02.rhts.eng.bos.redhat.com    Kernel=2.6.18-164.el5
DIR=/mnt/iostestmnt/fio    DEV=/dev/cciss/c1d0p1
Workload=bsr    iosched=cfq    Filesz=1G    bs=8K
=========================================================================
job   Set  NR  ReadBW(KB/s)  MaxClat(us)  WriteBW(KB/s)  MaxClat(us)
---   ---  --  ------------  -----------  -------------  -----------
bsr   1    1   230014        52873        0              0
bsr   1    2   173762        115622       0              0
bsr   1    4   175933        323484       0              0
bsr   1    8   171647        1215375      0              0
bsr   1    16  168934        2859577      0              0

Host=storageqe-02.rhts.eng.bos.redhat.com    Kernel=2.6.18-191.el5
DIR=/mnt/iostestmnt/fio    DEV=/dev/cciss/c1d0p1
Workload=bsr    iosched=cfq    Filesz=1G    bs=8K
=========================================================================
job   Set  NR  ReadBW(KB/s)  MaxClat(us)  WriteBW(KB/s)  MaxClat(us)
---   ---  --  ------------  -----------  -------------  -----------
bsr   1    1   174919        25672        0              0
bsr   1    2   45996         126272       0              0
bsr   1    4   46002         621047       0              0
bsr   1    8   48712         823121       0              0
bsr   1    16  46985         2082693      0              0

Version-Release number of selected component (if applicable):
2.6.18-191.el5

How reproducible:
Always

Steps to Reproduce:
1. Run iostest

Actual results:
With 2 or more concurrent sequential readers, read bandwidth drops from roughly 170000 KB/s on 2.6.18-164.el5 to roughly 46000 KB/s on 2.6.18-191.el5.

Expected results:
Read bandwidth on 2.6.18-191.el5 comparable to 2.6.18-164.el5.

Additional info:
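For reference, something like the following selects CFQ on the test device before a run. The cciss sysfs name is inferred from the DEV= line above, and the cache-drop step is my own addition, not something iostest prints:

# The scheduler is set on the whole device, not the partition; devices with
# '/' in their names appear under /sys/block with '!' (cciss/c1d0 -> cciss!c1d0)
echo cfq > '/sys/block/cciss!c1d0/queue/scheduler'
cat '/sys/block/cciss!c1d0/queue/scheduler'   # active scheduler is shown in [brackets]

# Start each run with a cold page cache
echo 3 > /proc/sys/vm/drop_caches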
iostest seems to rely on bashisms that don't exist in my RHEL 5 version of bash. So, I just ran a fio job file for 1, 2, 4, and 8 sequential readers. I don't see this performance issue on my storage, an IBM 2810XIV. Maybe I got the fio job file syntax wrong:

[global]
size=4096m
directory=/mnt/test/
ioscheduler=deadline   # changed this between runs
invalidate=1
runtime=30
time_based
rw=read

[bsr1]
nrfiles=1

[bsr2]
stonewall
nrfiles=2

[bsr4]
stonewall
nrfiles=4

[bsr8]
stonewall
nrfiles=8

I tested with kernel 2.6.18-194.el5.
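For completeness, a loop along these lines can drive the job file above under both schedulers (the file name bsr.fio and the log names are just placeholders I'm using here):

# fio switches the device to the scheduler named by ioscheduler= when the
# job starts, so only the job file needs to change between runs
for sched in deadline cfq; do
    sed -i "s/^ioscheduler=.*/ioscheduler=$sched/" bsr.fio
    fio bsr.fio > fio-$sched.log
done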
I've tried to reproduce this on my HP EVA and am unable to. In fact, the 5.5 kernel seems to perform better than the 5.4 kernel there. I'm not sure what to do with this bug at this point.
Vivek, I'm closing this bug. If you can reproduce it, feel free to re-open it. Thanks!