Bug 240077 - Panic under high disk I/O (stack overflow: XFS + LVM)
Summary: Panic under high disk I/O (stack overflow: XFS + LVM)
Keywords:
Status: CLOSED DUPLICATE of bug 227331
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: 6
Hardware: i686
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Eric Sandeen
QA Contact: Brian Brock
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2007-05-14 22:16 UTC by Nathan Valentine
Modified: 2007-11-30 22:12 UTC (History)
0 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2007-08-09 19:42:38 UTC
Type: ---
Embargoed:


Attachments
loghost capture of kern (5.53 MB, text/plain)
2007-05-14 22:16 UTC, Nathan Valentine

Description Nathan Valentine 2007-05-14 22:16:52 UTC
Description of problem:

We have a MySQL database server that hosts several very large and very active
(read-intensive) databases on an XFS filesystem on an LVM volume, running on top
of a 3ware mirrored RAID. Several times a week the kernel panics under high disk
I/O. The attached stack trace was captured on a loghost; we are unable to get the
oops message from the console because the crashes leave the machine in a state
where it does not accept console input. We are working on putting a serial
console on the machine, but here is the information we have available now:


# dmesg | grep -i 3ware
3ware 9000 Storage Controller device driver for Linux v2.26.02.008.
scsi0 : 3ware 9000 Storage Controller
3w-9xxx: scsi0: Found a 3ware 9000 Storage Controller at 0xda300000, IRQ: 16.

# df -h | grep mysql
/dev/mapper/system-mysql
                     1000G  431G  570G  44% /var/lib/mysql

# vgdisplay -v 
    Finding all volume groups
    Finding volume group "system"
  --- Volume group ---
  VG Name               system
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  11
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               2.63 TB
  PE Size               4.00 MB
  Total PE              689640
  Alloc PE / Size       256000 / 1000.00 GB
  Free  PE / Size       433640 / 1.65 TB
  VG UUID               GgcbZ2-ex1S-mAIJ-D9xl-oH9m-DaPA-Ms9l09
   
  --- Logical volume ---
  LV Name                /dev/system/mysql
  VG Name                system
  LV UUID                wOtwiF-Qfde-AF2F-Sk4V-7Ufz-CzuO-qNeYMM
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1000.00 GB
  Current LE             256000
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:0
   
  --- Physical volumes ---
  PV Name               /dev/sdb1     
  PV UUID               ZUdR21-DBuU-vzF3-RG9l-Xypc-J2jX-Gbwftv
  PV Status             allocatable
  Total PE / Free PE    689640 / 433640

# uname -a 
Linux <somehost> 2.6.20-1.2925.fc6 #1 SMP Sat Mar 10 19:15:16 EST 2007 i686 i686
i386 GNU/Linux


Version-Release number of selected component (if applicable):

2.6.20-1.2925.fc6


How reproducible:

At least twice a week under "normal" high load.


Additional Info:

Kernel stack trace attached.
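
In the meantime, a minimal sketch of capturing the oops over a serial console
(the port name ttyS0, the 115200 baud rate, and the GRUB config path are
assumptions, not our actual setup): add "console=tty0 console=ttyS0,115200n8"
to the kernel line in /boot/grub/grub.conf on this machine, then log the
traffic on the machine at the other end of the null-modem cable:

# screen -L /dev/ttyS0 115200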

Comment 1 Nathan Valentine 2007-05-14 22:16:54 UTC
Created attachment 154693 [details]
loghost capture of kern

Comment 2 Chuck Ebbert 2007-05-14 22:26:03 UTC
XFS on LVM volumes will do that.

Kernels from http://www.linuxant.com/driverloader/wlan/full/downloads.php
may work, but the 16K stacks can cause other problems (out of memory
when trying to start a new task). Setting /proc/sys/vm/min_free_kbytes
to 16000 can help prevent that, but it's not guaranteed.
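
A sketch of applying that setting, for anyone who wants to try it (takes
effect immediately but does not persist across reboots):

# sysctl -w vm.min_free_kbytes=16000

or equivalently:

# echo 16000 > /proc/sys/vm/min_free_kbytes

To make it persist, add the line "vm.min_free_kbytes = 16000" to /etc/sysctl.conf.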



Comment 3 Nathan Valentine 2007-05-15 16:51:00 UTC
Is ext3 on LVM known to exhibit this behavior as well?

Comment 4 Nathan Valentine 2007-05-15 17:14:20 UTC
Actually, I guess a better question would be "Where in the Fedora community
should I have found documentation about this and similar issues?" 



Comment 5 Chuck Ebbert 2007-05-16 17:30:18 UTC
(In reply to comment #3)
> Is ext3 on LVM known to exhibit this behavior as well?

ext3 should be fine. Raw devices probably would work best though.

(Stack overflows with LVM + XFS are well known in the Linux kernel community,
but probably not well enough known among user groups.)

Comment 6 Eric Sandeen 2007-06-27 16:26:26 UTC
I'll put this under my name - if nothing else I may dup it to another bug that
might get WONTFIXed eventually, unfortunately.  Stacked I/O + XFS + 4k stacks is
tough; lots of stack reductions have been done in XFS, but it's unlikely that
this is ever going to be 100% robust.

16K stacks are probably overkill; 8K stacks on x86 (a kernel config option) or
the default 8K stacks on x86_64 will probably work fine.
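
A quick way to check how the running kernel was built (assuming the distro
installs its config under /boot, as Fedora does):

# grep CONFIG_4KSTACKS /boot/config-$(uname -r)

CONFIG_4KSTACKS=y means the kernel uses 4K stacks; unset or =n means 8K stacks
on i386.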

As we get more stacked filesystems in the kernel (think unionfs, ecryptfs) the
4k stacks may get more interesting too.

Comment 7 Nathan Valentine 2007-06-27 16:42:23 UTC
FWIW, I was able to reproduce this panic on both 8k and 16k stacks. 

We eventually "solved" the problem by one of two methods depending on the role
of the server:

1) Swap XFS for ext3. 
2) Move XFS filesystem from LVM to raw partitions.

Since we made these changes, things have been stable and performant. 
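
For anyone else in the same spot, an illustrative sketch of option 1; the
target device and staging mount point are hypothetical, not our actual layout:

# service mysqld stop
# mkfs.ext3 /dev/sdc1
# mkdir -p /mnt/newmysql
# mount /dev/sdc1 /mnt/newmysql
# rsync -aH /var/lib/mysql/ /mnt/newmysql/

Then update /etc/fstab so the new filesystem mounts at /var/lib/mysql and
restart mysqld.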

But I agree with the assessment: stacked storage management is not going away,
so this will likely continue to be a problem.

Thanks for your help.

Comment 8 Eric Sandeen 2007-06-27 17:13:06 UTC
On 16k stacks?  Yikes... ok, that's unexpected.

Do you happen to have any stack traces from that kernel?  I wonder if you're
hitting recursion...  I'll look more closely at the kernel log you have posted
already.  I'm a bit skeptical of it; it seems to show *hundreds* of functions on
the stack...?  Even with the false positives from dump_stack() it doesn't seem
quite right.

Comment 9 Nathan Valentine 2007-06-27 17:52:26 UTC
Unfortunately, I didn't save any of the debugging information from testing
alternative stack sizes. 



Comment 10 Eric Sandeen 2007-08-09 19:42:38 UTC
Duping this to an earlier 4k+xfs+lvm bug; though the root cause may be slightly
different, it's the same issue as far as I can tell - and one without a good
solution, I'm afraid.

*** This bug has been marked as a duplicate of 227331 ***

