Bug 123440 - syslog flooded with 'raid5: switching cache buffer size' messages
Status: CLOSED CANTFIX
Product: Red Hat Enterprise Linux 3
Classification: Red Hat
Component: kernel
Version: 3.0
Hardware: i686 Linux
Priority: medium   Severity: high
Assigned To: Heinz Mauelshagen
QA Contact: Brian Brock
Reported: 2004-05-18 12:05 EDT by Sean E. Millichamp
Modified: 2007-11-30 17:07 EST

Doc Type: Bug Fix
Last Closed: 2007-09-12 11:38:38 EDT
Attachments: None
Description Sean E. Millichamp 2004-05-18 12:05:23 EDT
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.4.1)
Gecko/20031114

Description of problem:
I have a server with 4 drives in a software RAID-5 array.  LVM is
managing this md device.  All of my filesystems are ext3 created with
"mke2fs -j /dev/xxx/xxx"

I had an LVM snapshot of one of the volumes in place when I started a
large file copy to the volume.  Shortly afterward, system performance
ground to a halt and many processes started to bottleneck on I/O.

During this time a huge number of lines were sent to syslog and
appeared in /var/log/messages:

May 18 11:27:14 xmco2 kernel: raid5: switching cache buffer size, 4096 --> 512
May 18 11:27:14 xmco2 kernel: raid5: switching cache buffer size, 0 --> 4096
May 18 11:27:14 xmco2 last message repeated 10 times
May 18 11:27:14 xmco2 kernel: raid5: switching cache buffer size, 0 --> 512
May 18 11:27:14 xmco2 kernel: raid5: switching cache buffer size, 512 --> 4096
May 18 11:27:14 xmco2 kernel: raid5: switching cache buffer size, 0 --> 4096
May 18 11:27:14 xmco2 last message repeated 29 times
May 18 11:27:14 xmco2 kernel: raid5: switching cache buffer size, 4096 --> 512
May 18 11:27:14 xmco2 kernel: raid5: switching cache buffer size, 512 --> 4096
May 18 11:27:14 xmco2 kernel: raid5: switching cache buffer size, 4096 --> 512
May 18 11:27:14 xmco2 kernel: raid5: switching cache buffer size, 0 --> 512

In a period of about 10 minutes, more than 100,000 lines of this kind
appeared, and that does not count the ones folded into the "last
message repeated X times" lines.
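For reference, the true message volume, including the repeats folded into
syslog's "last message repeated X times" lines, can be estimated with a short
script like the following (purely illustrative; the sample lines are taken
from the excerpt above):

```python
import re

# Illustrative helper: count raid5 switch messages in a syslog excerpt,
# expanding the "last message repeated N times" lines that syslog emits
# in place of consecutive duplicates.
SAMPLE = """\
May 18 11:27:14 xmco2 kernel: raid5: switching cache buffer size, 4096 --> 512
May 18 11:27:14 xmco2 kernel: raid5: switching cache buffer size, 0 --> 4096
May 18 11:27:14 xmco2 last message repeated 10 times
May 18 11:27:14 xmco2 kernel: raid5: switching cache buffer size, 0 --> 512
"""

def count_switches(log_text):
    total = 0
    last_was_switch = False
    for line in log_text.splitlines():
        if "raid5: switching cache buffer size" in line:
            total += 1
            last_was_switch = True
        else:
            m = re.search(r"last message repeated (\d+) times", line)
            if m and last_was_switch:
                # the repeats refer to the switch message just above
                total += int(m.group(1))
            else:
                last_was_switch = False
    return total

print(count_switches(SAMPLE))  # 13: 3 direct lines + 10 folded repeats
```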

I don't know if the messages were the cause of the disk bottleneck or
a symptom of the problem.  I suspect they are a symptom of some
serious problem with the combination of raid5, lvm, and snapshots.
When I deleted all the snapshots performance quickly increased back to
normal levels.


Version-Release number of selected component (if applicable):


How reproducible:
Didn't try

Steps to Reproduce:
I can't try to reproduce this problem on this machine because it
caused serious trouble when it occurred the first time.  I suspect
that it is entirely reproducible, though.


Additional info:

I could not find any mention in the RHEL 3 documentation warning that
md RAID-5 + LVM + LVM snapshots could cause crippling performance
problems and/or should not be attempted.
Comment 1 Sean E. Millichamp 2004-05-19 08:47:49 EDT
I created a snapshot last night on the volume and a flood of the
messages began appearing again.  So something with the LVM snapshots
definitely seems to be triggering a problem with raid5.

I realized I didn't state the kernel version before, but I'm current
with up2date, so: kernel-smp-2.4.21-15.EL (i686)
Comment 2 Sean E. Millichamp 2004-05-19 10:34:29 EDT
I just got a kernel panic that caused the volume to become unusable:
May 19 10:31:28 xmco2 kernel: Unable to handle kernel NULL pointer dereference at virtual address 00000008
May 19 10:31:28 xmco2 kernel:  printing eip:
May 19 10:31:28 xmco2 kernel: f886e180
May 19 10:31:28 xmco2 kernel: *pde = 04581001
May 19 10:31:28 xmco2 kernel: *pte = 00000000
May 19 10:31:28 xmco2 kernel: Oops: 0000
May 19 10:31:28 xmco2 kernel: parport_pc lp parport autofs e1000 ip_conntrack_ftp iptable_mangle ipt_LOG ipt_state iptable_filter iptable_nat ip_conntrack ip_tables floppy sg microcode loo
May 19 10:31:28 xmco2 kernel: CPU:    1
May 19 10:31:28 xmco2 kernel: EIP:    0060:[<f886e180>]    Not tainted
May 19 10:31:28 xmco2 kernel: EFLAGS: 00010213
May 19 10:31:28 xmco2 kernel:
May 19 10:31:28 xmco2 kernel: EIP is at lvm_find_exception_table [lvm-mod] 0x50 (2.4.21-15.ELsmp/i686)
May 19 10:31:28 xmco2 kernel: eax: 00000000   ebx: f90d9038   ecx: 00000000   edx: 0001ffff
May 19 10:31:28 xmco2 kernel: esi: 1e948280   edi: 00000018   ebp: 00000902   esp: f7f33e20
May 19 10:31:28 xmco2 kernel: ds: 0068   es: 0068   ss: 0068
May 19 10:31:28 xmco2 kernel: Process kupdated (pid: 10, stackpage=f7f33000)
May 19 10:31:28 xmco2 kernel: Stack: 0001ffff c04efc70 f35cf000 00000000 00000000 f7f33ea4 f886d1fe 00000902
May 19 10:31:28 xmco2 kernel:        1e948280 f35cf000 1e948280 00000001 f35cf170 f35cf000 f7f05200 00000001
May 19 10:31:28 xmco2 kernel:        f88695f8 f7f33ea4 f7f33e9c 1e948280 f35cf000 f7fad000 f7f32000 00000000
May 19 10:31:28 xmco2 kernel: Call Trace:   [<f886d1fe>] lvm_snapshot_remap_block [lvm-mod] 0x7e (0xf7f33e38)
May 19 10:31:28 xmco2 kernel: [<f88695f8>] lvm_map [lvm-mod] 0x1a8 (0xf7f33e60)
May 19 10:31:28 xmco2 kernel: [<c0133d1b>] update_process_times_statistical [kernel] 0x7b (0xf7f33e84)
May 19 10:31:28 xmco2 kernel: [<f8869a57>] lvm_make_request_fn [lvm-mod] 0x17 (0xf7f33eb8)
May 19 10:31:28 xmco2 kernel: [<c01cd22a>] generic_make_request [kernel] 0xea (0xf7f33ec4)
May 19 10:31:28 xmco2 kernel: [<c01cd2c9>] submit_bh_rsector [kernel] 0x49 (0xf7f33eec)
May 19 10:31:28 xmco2 kernel: [<c0161dfb>] write_locked_buffers [kernel] 0x3b (0xf7f33f08)
May 19 10:31:28 xmco2 kernel: [<c0161f70>] write_some_buffers [kernel] 0x160 (0xf7f33f1c)
May 19 10:31:28 xmco2 kernel: [<c016611d>] sync_old_buffers [kernel] 0x7d (0xf7f33fc0)
May 19 10:31:28 xmco2 kernel: [<c01664c6>] kupdate [kernel] 0xe6 (0xf7f33fd4)
May 19 10:31:28 xmco2 kernel: [<c01663e0>] kupdate [kernel] 0x0 (0xf7f33fe4)
May 19 10:31:28 xmco2 kernel: [<c010958d>] kernel_thread_helper [kernel] 0x5 (0xf7f33ff0)
May 19 10:31:28 xmco2 kernel:
May 19 10:31:28 xmco2 kernel: Code: 39 71 08 74 0f 8b 09 47 39 d9 75 f4 83 c4 08 5b 5e 5f 5d c3
May 19 10:31:28 xmco2 kernel:
May 19 10:31:28 xmco2 kernel: Kernel panic: Fatal exception

It appeared amid a number of the previously described messages.
Comment 3 Heinz Mauelshagen 2004-05-26 09:24:04 EDT
The overhead of switching MD RAID5 cache buffers is a known issue that
we can't avoid in that combination, because of the restriction that the
lvm driver maps 512-byte sectors.
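That restriction can be illustrated with a minimal model (Python, purely
illustrative, not the actual kernel code): because lvm1 submits I/O in
512-byte sectors while normal ext3 I/O uses 4096-byte buffers, interleaved
requests force the raid5 cache to switch size, and emit a log line, on nearly
every request.

```python
def simulate_raid5_cache(request_sizes):
    """Illustrative model: the RHEL 3 raid5 driver keeps one cache
    buffer size per array and logs a 'switching cache buffer size'
    line whenever a request arrives with a different size."""
    cache_size = 0  # 0 = not yet set, as in the "0 --> 4096" lines above
    messages = []
    for size in request_sizes:
        if size != cache_size:
            messages.append(
                f"raid5: switching cache buffer size, {cache_size} --> {size}")
            cache_size = size
    return messages

# lvm1 snapshot I/O (512-byte sectors) interleaved with normal ext3
# I/O (4096-byte buffers) switches on almost every request:
logs = simulate_raid5_cache([4096, 512, 4096, 512, 512, 4096])
print(len(logs))  # 5 switches out of 6 requests
```

With uniform request sizes the cache switches at most once, which is why the
flood only appears when the two I/O paths are mixed.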
Comment 4 Sean E. Millichamp 2004-05-26 09:54:54 EDT
Okay... if the overhead is just a fact of life I suppose I can deal
with that if I have to.  Any idea if some future fix/workaround is
planned that I might see in (say) RHEL4+, or is that just a permanent
fact of life?

It would be nice if that restriction/warning had been mentioned either
in the installer or the RHEL documentation.  The installer lets you
build LVM on MD devices and there is no warning I saw in the docs or
installer saying that bad things could happen with LVM on MD RAID-5.

But any idea about the kernel oops?  It seems related, and that is far
more serious than a performance hit.
Comment 5 Heinz Mauelshagen 2004-05-26 10:02:19 EDT
Fix will be in RHEL4+ with device-mapper/lvm2 replacing lvm1.
Wasn't able to reproduce the oops here yet.
Comment 6 Jason Smith 2005-10-31 12:13:22 EST
Does comment #5 mean that there will be no fix for this constant cache
buffer switching in RHEL 3, then?  I am seeing this on a server, and it
doesn't seem to have been caused by LVM snapshots; I believe it was
triggered only by some heavy I/O load.
Comment 7 Heinz Mauelshagen 2007-09-12 11:38:38 EDT
Closing, because this is not reproducible.
