Bug 677005 - CFQ IO SCHED bad performance
Summary: CFQ IO SCHED bad performance
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kernel
Version: 5.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Assignee: Red Hat Kernel Manager
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-02-12 14:12 UTC by Yuri Arabadji
Modified: 2014-10-27 11:09 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-06-02 13:09:27 UTC
Target Upstream Version:


Attachments: none

Description Yuri Arabadji 2011-02-12 14:12:47 UTC
I've got a server with a 3ware RAID card and a recent RHEL kernel on it. I tried updating the RAID firmware, but I'm still seeing the same slowness.

Here's how to reproduce it:
1. Open two shells.
2. Start a dd read from the device in the first shell (commands sketched below the list).
3. Do some I/O in the second shell.
4. Change the scheduler to "deadline", repeat steps 1-3, and notice the huge difference.
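A minimal sketch of what I run in each shell; /dev/sda and the dd block size are just my setup, adjust as needed:

# shell 1: keep a sequential read going against the array
dd if=/dev/sda of=/dev/null bs=1M

# shell 2: check the active scheduler (shown in brackets), then time some unrelated I/O
cat /sys/block/sda/queue/scheduler
time yum search sadfaserZ >/dev/null

# to retest with another scheduler: stop dd, switch, drop caches, repeat
echo deadline > /sys/block/sda/queue/scheduler
echo 1 > /proc/sys/vm/drop_caches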

The kernel version is: 2.6.18-238.1.1.el5

Here are the test results:

=========================================================
freshly booted machine

<<launch dd in 1st shell>>
[root@r1d1-28 ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]
[root@r1d1-28 ~]# time yum search sadfaserZ >/dev/null
Warning: No matches found for: sadfaserZ

real    0m24.948s
user    0m0.344s
sys     0m0.121s
<<stop dd in 1st shell>>
[root@r1d1-28 ~]# echo 0 > /sys/block/sda/queue/iosched/slice_idle
[root@r1d1-28 ~]# echo 1 | tee /proc/sys/vm/drop_caches
1
<<launch dd in 1st shell>>
[root@r1d1-28 ~]# time yum search sadfaserZ >/dev/null
Warning: No matches found for: sadfaserZ

real    0m3.178s
user    0m0.368s
sys     0m0.136s
<<stop dd in 1st shell>>
[root@r1d1-28 ~]# echo deadline  >  /sys/block/sda/queue/scheduler 
[root@r1d1-28 ~]# echo 1 | tee /proc/sys/vm/drop_caches
1
<<launch dd in 1st shell>>
[root@r1d1-28 ~]# time yum search sadfaserZ >/dev/null
Warning: No matches found for: sadfaserZ

real    0m2.936s
user    0m0.348s
sys     0m0.131s
=========================================================

I used "yum" because it does db-like i/o, but it doesn't matter - it could be anything requesting disk access. What would you recommend in this situation? I thought CFQ was fixed previous year in -192.

Thanks!

Comment 1 Ric Wheeler 2011-02-12 16:16:31 UTC
Hi Yuri,

Can you please open a ticket through Red Hat support if you have a subscription?

Red Hat Bugzilla is not meant as the first line of customer support, thanks!

Comment 2 Yuri Arabadji 2011-02-12 17:09:20 UTC
I can't. I don't have a subscription. The same is observed with the latest CentOS 5.5 kernel. The problem seems to be limited to this particular server, although I've only tested two other servers so far. I'll get back once I have more info. But again, it happens only with CFQ; see my test output above.

Comment 3 Ric Wheeler 2011-02-12 17:59:09 UTC
Hi Yuri,

If you don't have a subscription, the best way to get help is to post things to the upstream mailing list (for CFQ issues, Jens Axboe is the upstream lead and the right list is probably linux-kernel.org).

Thanks!

Comment 4 RHEL Program Management 2014-03-07 12:45:58 UTC
This bug/component is not included in scope for RHEL-5.11.0, which is the last RHEL 5 minor release. This Bugzilla will soon be CLOSED as WONTFIX at the end of the RHEL 5.11 development phase (Apr 22, 2014). Please contact your account manager or support representative in case you need to escalate this bug.

Comment 5 RHEL Program Management 2014-06-02 13:09:27 UTC
Thank you for submitting this request for inclusion in Red Hat Enterprise Linux 5. We've carefully evaluated the request, but are unable to include it in the RHEL 5 stream. If the issue is critical for your business, please provide additional business justification through the appropriate support channels (https://access.redhat.com/site/support).

