Bug 204881 - md RAID1 writes eat up 100% CPU, high wa%
Summary: md RAID1 writes eat up 100% CPU, high wa%
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: 5
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Kernel Maintainer List
QA Contact: Brian Brock
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2006-09-01 00:53 UTC by Trevor Cordes
Modified: 2007-11-30 22:11 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2007-04-12 14:47:09 UTC
Type: ---
Embargoed:



Description Trevor Cordes 2006-09-01 00:53:54 UTC
Description of problem:
Running tests with dd, I can make large (0.5 GB) writes to RAID1 ext3 filesystems
eat up so much CPU that interactive work becomes very glitchy.  Top shows idle
time dropping to zero and %wa going to 100%.  This behaviour seems buggy to me;
writes should perhaps be throttled back so they do not consume all CPU.


Version-Release number of selected component (if applicable):
2.6.17-1.2139_FC5smp
2.6.12-2.3.legacy_FC3smp

How reproducible:
always

Steps to Reproduce:
1. Set up an FC system with an md (software) RAID1 ext3 filesystem
2. dd if=/dev/zero of=/raid1fs/z bs=1M count=512
3. watch top (a monitoring sketch follows below)
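
For reference, a minimal monitoring sketch to go with step 3 (vmstat is just one
way to watch it; top works equally well, and the /raid1fs mount point is taken
from step 2):

# terminal 1: the 512 MB write from step 2
dd if=/dev/zero of=/raid1fs/z bs=1M count=512
# terminal 2: the "wa" and "id" columns correspond to top's %wa and idle
vmstat 1
# clean up the test file afterwards
rm /raid1fs/z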
  
Actual results:
Top shows %wa climbing from near zero to near 100% for about 10 seconds, and
idle drops to 0%.  The system gets glitchy: file-serving streams (MP3, MythTV)
stutter until the write is done and idle goes back above 0%.

Expected results:
Writes should not hog the CPU like this.  There is no need to write them out as
fast as possible at the expense of interactive CPU time.

Additional info:
Tested on one FC5 box and two different FC3 boxes; all exhibit this behaviour.
Test systems used RAID1 for root + boot + swap.  Tested on root and boot.  All
systems were using onboard Intel ICH4 or ICH5 ATA100 ports.

Interestingly, if I run the same test on my 2TB RAID6 array, the CPU behaves
properly: %wa never gets very high and idle stays above 0%.  Very strange, since
RAID6 requires far more CPU horsepower.

Test systems were all at least 2GHz P4, dual-channel DDR.  Main test system had
2GB RAM.

Comment 1 David Lawrence 2006-09-05 15:59:08 UTC
Changing to proper owner, kernel-maint.

Comment 2 Dave Jones 2006-10-16 21:21:27 UTC
A new kernel update has been released (Version: 2.6.18-1.2200.fc5)
based upon a new upstream kernel release.

Please retest against this new kernel, as a large number of patches
go into each upstream release, possibly including changes that
may address this problem.

This bug has been placed in NEEDINFO state.
Due to the large volume of inactive bugs in bugzilla, if this bug is
still in this state in two weeks time, it will be closed.

Should this bug still be relevant after this period, the reporter
can reopen the bug at any time. Any other users on the Cc: list
of this bug can request that the bug be reopened by adding a
comment to the bug.

In the last few updates, some users upgrading from FC4->FC5
have reported that installing a kernel update has left their
systems unbootable. If you have been affected by this problem
please check you only have one version of device-mapper & lvm2
installed.  See bug 207474 for further details.

If this bug is a problem preventing you from installing the
release this version is filed against, please see bug 169613.

If this bug has been fixed, but you are now experiencing a different
problem, please file a separate bug for the new problem.

Thank you.

Comment 3 Trevor Cordes 2006-11-08 11:17:38 UTC
Bug still present as of 2.6.18-1.2200.fc5.


Comment 5 Ade Rixon 2006-11-15 11:05:46 UTC
I didn't see this until I booted kernel-2.6.18-1.2239.fc5, at which point the
system started a resync of my 40GB RAID1 partition that eventually ground to a
halt somewhere around 50%, locking everything up.  After reverting to 2200, the
problem didn't recur.

Comment 6 Dave Jones 2006-11-21 00:13:34 UTC
Can you try just a dd to a single disk (i.e., don't start up the RAID set) and
see if the same effect occurs?

If it does, it points at a problem with your IO controller. If it goes away,
it's definitely an md problem.

After this, also try two dd's in parallel, one against each of the disks, and
see if that also induces the same effect.
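
A rough sketch of both tests, under the assumption that each member disk has a
scratch partition that can safely be overwritten (the /dev/hdc5 and /dev/hdd5
names are placeholders; a swapped-off swap partition, as in the next comment,
works):

# free a scratch partition on each disk
swapoff /dev/hdc5
swapoff /dev/hdd5
# test 1: dd to a single disk, bypassing md entirely
dd if=/dev/zero of=/dev/hdc5 bs=1M count=512
# test 2: two dd's in parallel, one against each disk
dd if=/dev/zero of=/dev/hdc5 bs=1M count=512 &
dd if=/dev/zero of=/dev/hdd5 bs=1M count=512 &
wait
# restore swap afterwards (dd destroys the swap signature)
mkswap /dev/hdc5 && swapon /dev/hdc5
mkswap /dev/hdd5 && swapon /dev/hdd5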


Comment 7 Trevor Cordes 2006-11-22 18:02:25 UTC
OK, good idea.  I tried it to a single disk (I swapoffed the swap space and just
dd'd to that), and it shows the same behaviour.  So I guess I was barking up the
wrong tree a bit.  The problem isn't md; the problem is lower than that.  I had
been pretty sure it was md because my RAID6 array on the same disks/controllers
does not exhibit this problem, just the RAID1 arrays (and now no-RAID too).

I don't really get it.  Why am I seeing semi-PIO-like behaviour on a relatively
modern system?  This happens (confirmed) on 3 other systems ranging from P100s
to Celeron 1.7s.

My main test system is a P4 2.4 on an E7201 board (ICH5 I think).

# hdparm /dev/hdc5

/dev/hdc5:
 multcount    = 16 (on)
 IO_support   =  3 (32-bit w/sync)
 unmaskirq    =  1 (on)
 using_dma    =  1 (on)
 keepsettings =  0 (off)
 readonly     =  0 (off)
 readahead    = 256 (on)
 geometry     = 36483/255/63, sectors = 5124672, start = 580798008

As you can see, I've made sure to use DMA, unmaskirq, etc.

My test system is unusual in that it has 4 PCI I/O controller cards, but the
other test systems are just using the standard onboard Intel southbridge.  All
have DMA on.
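
For what it's worth, a quick sketch of checking and setting those flags with
hdparm (the device name is a placeholder; repeat per drive):

# show current settings (multcount, IO_support, DMA, unmaskirq, ...)
hdparm /dev/hdc
# enable 32-bit I/O with sync, IRQ unmasking and DMA if any are off
hdparm -c3 -u1 -d1 /dev/hdc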

So I suppose the bug should be revised, and perhaps it's not a bug at all.  But
should Linux really go to near-100% wait writing out huge files at the expense
of interactive performance?


Comment 8 Dave Jones 2006-11-24 20:12:40 UTC
I suppose it's a latency/throughput tradeoff.  You may be able to tune it with
the tunables in /proc/sys/vm/ so that it doesn't write stuff out to disk so
regularly, but I'm not even sure that's going to be the magic bullet you seek.
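
A hedged sketch of the kind of tuning meant here; dirty_ratio and
dirty_background_ratio are the usual writeback knobs in /proc/sys/vm/, but the
values below are purely illustrative, not recommendations:

# start background writeback earlier and cap dirty memory lower
echo 10 > /proc/sys/vm/dirty_ratio
echo 5 > /proc/sys/vm/dirty_background_ratio
# equivalently, via sysctl
sysctl -w vm.dirty_ratio=10
sysctl -w vm.dirty_background_ratio=5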


Comment 9 Trevor Cordes 2007-04-12 09:24:06 UTC
I'd close this bug.  It's really a NOTABUG, and brain-deadedness / invalid
expectations on my part.  Sorry.


Comment 10 Trevor Cordes 2007-11-25 12:38:07 UTC
Anyone seeing this bug, check out kernel bug:

http://bugzilla.kernel.org/show_bug.cgi?id=7372

And try changing all your /sys/block/sda/queue/nr_requests files (replace sda
with each of your hard drives) to 16.  The default is 128; 128 definitely causes
starvation for my issue.  16 made the symptoms disappear completely on a lightly
loaded server.  Still a bit glitchy when heavily loaded, but much more bearable.
See my comments in the kernel bug above.
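
A small sketch of applying that to every drive (the sd* glob is an assumption;
on IDE systems like the ones in this report, use hd* or the actual device
names):

# drop the block-layer queue depth from the default 128 to 16
for q in /sys/block/sd*/queue/nr_requests; do
    echo 16 > "$q"
done
# verify
cat /sys/block/sd*/queue/nr_requests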


