Bug 204881 - md RAID1 writes eat up 100% CPU, high wa%
Product: Fedora
Classification: Fedora
Component: kernel
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Kernel Maintainer List
QA Contact: Brian Brock
Reported: 2006-08-31 20:53 EDT by Trevor Cordes
Modified: 2007-11-30 17:11 EST
CC: 2 users
Doc Type: Bug Fix
Last Closed: 2007-04-12 10:47:09 EDT

Attachments: None
Description Trevor Cordes 2006-08-31 20:53:54 EDT
Description of problem:
Running tests with dd, I can make big (0.5GB) writes to RAID1 ext3 filesystems
eat up so much CPU that interactive tasks get really glitchy.  Top shows idle
time going to zero and %wa going to 100%!  This behaviour seems buggy to me;
writes should perhaps be throttled back so they don't consume all the CPU.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. make FC system with md (software) RAID1 ext3 fs
2. dd if=/dev/zero of=/raid1fs/z bs=1M count=512
3. watch top
Actual results:
Top shows %wa climbing from near zero to near 100% for 10 seconds, and idle
drops to 0%.  The system gets glitchy: file-serving streams (MP3, MythTV)
stutter until the write completes and idle goes back above 0%.

Expected results:
Writes should not hog the CPU like this.  There is no need to write them out as
fast as possible at the expense of interactive CPU time.
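The %wa figure quoted above is the iowait CPU state; one way to sample it directly (a sketch, assuming the standard Linux /proc layout) is to read the sixth field of the aggregate "cpu" line in /proc/stat while the dd runs:

```shell
# iowait is the 6th field on the aggregate "cpu" line of /proc/stat,
# counted in ticks since boot; sample it twice a few seconds apart
# while the dd write is running and compare the deltas.
awk '/^cpu /{print "iowait ticks:", $6}' /proc/stat
```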

Additional info:
Tested on one FC5 box and two different FC3 boxes; all exhibit this behaviour.
Test systems used RAID1 for root + boot + swap, and I tested on root and boot.
All systems were using onboard Intel ICH4 or ICH5 ATA100 ports.

Interestingly, if I run the same test on my 2TB RAID6 array, the CPU behaves
properly: %wa never gets very high and idle stays above 0%.  Very strange,
since RAID6 requires far more CPU horsepower.

Test systems were all at least 2GHz P4, dual-channel DDR.  Main test system had
Comment 1 David Lawrence 2006-09-05 11:59:08 EDT
Changing to proper owner, kernel-maint.
Comment 2 Dave Jones 2006-10-16 17:21:27 EDT
A new kernel update has been released (Version: 2.6.18-1.2200.fc5)
based upon a new upstream kernel release.

Please retest against this new kernel, as a large number of patches
go into each upstream release, possibly including changes that
may address this problem.

This bug has been placed in NEEDINFO state.
Due to the large volume of inactive bugs in bugzilla, if this bug is
still in this state in two weeks time, it will be closed.

Should this bug still be relevant after this period, the reporter
can reopen the bug at any time. Any other users on the Cc: list
of this bug can request that the bug be reopened by adding a
comment to the bug.

In the last few updates, some users upgrading from FC4->FC5
have reported that installing a kernel update has left their
systems unbootable. If you have been affected by this problem
please check you only have one version of device-mapper & lvm2
installed.  See bug 207474 for further details.

If this bug is a problem preventing you from installing the
release this version is filed against, please see bug 169613.

If this bug has been fixed, but you are now experiencing a different
problem, please file a separate bug for the new problem.

Thank you.
Comment 3 Trevor Cordes 2006-11-08 06:17:38 EST
Bug still present as of 2.6.18-1.2200.fc5.
Comment 5 Ade Rixon 2006-11-15 06:05:46 EST
I didn't see this until I booted kernel-2.6.18-1.2239.fc5, at which point the
system started a resync of my 40GB RAID1 partition that eventually ground to a
halt somewhere around 50%, locking everything up. Reverting to 2200, the problem
didn't recur.
Comment 6 Dave Jones 2006-11-20 19:13:34 EST
Can you try just a dd to a single disk (i.e., don't start up the RAID set) and
see if the same effect occurs?

If it does, it points at a problem with your IO controller. If it goes away,
it's definitely an md problem.

After this, also try two dd's in parallel, one against each of the disks, and
see if that induces the same effect.
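The isolation tests suggested above might look like the following sketch. The device names hda5 and hdc5 are assumptions standing in for expendable (e.g. swapped-off swap) partitions on the two RAID1 member disks; these writes are destructive, so only point them at partitions whose contents you can afford to lose:

```shell
# DESTRUCTIVE sketch of the isolation tests -- hda5/hdc5 are assumed
# to be expendable partitions on the two RAID1 member disks.
swapoff /dev/hda5
swapoff /dev/hdc5

# Test 1: write to a single disk, bypassing md entirely.
dd if=/dev/zero of=/dev/hda5 bs=1M count=512

# Test 2: the same write to both member disks in parallel.
dd if=/dev/zero of=/dev/hda5 bs=1M count=512 &
dd if=/dev/zero of=/dev/hdc5 bs=1M count=512 &
wait
```

If the single-disk write alone reproduces the symptom, md is exonerated and the problem is in the lower I/O layers.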
Comment 7 Trevor Cordes 2006-11-22 13:02:25 EST
OK, good idea.  I tried it on a single disk (I swapoffed the swap space and
just dd'd to that), and it exhibits the same behaviour.  So I guess I was
barking up the wrong tree a bit.  The problem isn't md; the problem is lower
than that.  I had been pretty sure it was md because my RAID6 array on the same
disks/controllers does not exhibit this problem, just the RAID1 arrays (and now

I don't really get it.  Why am I seeing semi-PIO behaviour on a relatively
modern system?  This happens (confirmed) on 3 other systems ranging from P100s
to Celeron 1.7s.

My main test system is P4 2.4 on a E7201 board (ICH5 I think).

# hdparm /dev/hdc5

 multcount    = 16 (on)
 IO_support   =  3 (32-bit w/sync)
 unmaskirq    =  1 (on)
 using_dma    =  1 (on)
 keepsettings =  0 (off)
 readonly     =  0 (off)
 readahead    = 256 (on)
 geometry     = 36483/255/63, sectors = 5124672, start = 580798008

As you can see, I've made sure to use DMA, unmaskirq, etc.

My test system is weird in that it has 4 PCI I/O controller cards, but the other
test systems are just using standard onboard Intel southbridge.  All have DMA on.

So I suppose the bug should be revised, and perhaps it's not a bug at all?  But
should Linux really go to near-100% wait writing out huge files at the expense
of interactive performance?
Comment 8 Dave Jones 2006-11-24 15:12:40 EST
I suppose it's a latency/throughput tradeoff.  You may be able to tune it with
the tunables in /proc/sys/vm/ so that it doesn't write stuff out to disk so
regularly, but I'm not even sure that's going to be the magic bullet you seek.
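The vm tunables mentioned above include the dirty-page thresholds, which control how much dirty data accumulates before writeback kicks in. A hedged sketch of turning them down (the specific values are illustrative only, not recommendations):

```shell
# Lower the dirty-page thresholds so writeback starts sooner and in
# smaller bursts, and writers are throttled earlier.
# Illustrative values, not tuned -- requires root:
echo 2  > /proc/sys/vm/dirty_background_ratio   # background writeback at 2% of RAM dirty
echo 10 > /proc/sys/vm/dirty_ratio              # throttle writers at 10% of RAM dirty

# Equivalent via sysctl:
#   sysctl -w vm.dirty_background_ratio=2 vm.dirty_ratio=10
```

Settings made this way do not persist across reboots; a permanent change would go in /etc/sysctl.conf.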
Comment 9 Trevor Cordes 2007-04-12 05:24:06 EDT
I'd close this bug.  It's really a NOTABUG: brain-deadedness / invalid
expectations on my part.  Sorry.
Comment 10 Trevor Cordes 2007-11-25 07:38:07 EST
Anyone seeing this bug, check out kernel bug:


And try changing all your /sys/block/sda/queue/nr_requests files (replace sda
with each of your hard drives) to 16.  The default is 128, and 128 definitely
causes starvation for my issue; 16 made the symptoms completely disappear on a
lightly loaded server.  It's still a bit glitchy when heavily loaded, but much
more bearable.  See my comments in the kernel bug above.
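The workaround described above, as a sketch (the sd[a-d] glob is an example; adjust it to match whichever drives back your arrays, and note the setting does not survive a reboot):

```shell
# Shrink each disk's I/O request queue from the default 128 to 16,
# as suggested above; requires root, glob is an example.
for q in /sys/block/sd[a-d]/queue/nr_requests; do
    [ -w "$q" ] && echo 16 > "$q"
done

# Verify the new values:
grep . /sys/block/sd*/queue/nr_requests
```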
