Bug 206777 - Kernel BUG at lib/list_debug.c:26 called from ext3_orphan_add during unlink
Summary: Kernel BUG at lib/list_debug.c:26 called from ext3_orphan_add during unlink
Status: CLOSED DUPLICATE of bug 219650
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kernel
Version: 5.0
Hardware: All Linux
Target Milestone: ---
Assignee: Eric Sandeen
QA Contact: Brian Brock
Keywords: Reopened
Depends On:
Blocks: 216989
Reported: 2006-09-16 15:24 UTC by Pti-seb
Modified: 2009-06-16 03:55 UTC (History)
11 users (show)

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2007-05-04 23:34:53 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments (Terms of Use)
Error message : part 1 (411.76 KB, image/jpeg)
2006-09-16 15:24 UTC, Pti-seb
no flags Details
Error message : part 2 (448.62 KB, image/jpeg)
2006-09-16 15:25 UTC, Pti-seb
no flags Details
Error message : part 3 (411.64 KB, image/jpeg)
2006-09-16 15:26 UTC, Pti-seb
no flags Details
Error message : part 4 (396.36 KB, image/jpeg)
2006-09-16 15:28 UTC, Pti-seb
no flags Details
Error message : part 5 (441.87 KB, image/jpeg)
2006-09-16 15:29 UTC, Pti-seb
no flags Details
Error message : part 6 (417.71 KB, image/jpeg)
2006-09-16 15:30 UTC, Pti-seb
no flags Details
screenshot of kernel BUG (506.49 KB, image/png)
2006-10-22 13:28 UTC, Christian Nolte
no flags Details
Kernel panic when trying to add 256 disks (16.02 KB, application/octet-stream)
2007-04-17 18:15 UTC, Mike Miller (OS Dev)
no flags Details

Description Pti-seb 2006-09-16 15:24:00 UTC
Description of problem:
Kernel BUG at lib/list_debug.c:26

Steps to Reproduce:
1. boot on Fedora Core 6 Test 3

More information in the attachments.

Comment 1 Pti-seb 2006-09-16 15:24:00 UTC
Created attachment 136429 [details]
Error message : part 1

Comment 2 Pti-seb 2006-09-16 15:25:33 UTC
Created attachment 136430 [details]
Error message : part 2

Comment 3 Pti-seb 2006-09-16 15:26:53 UTC
Created attachment 136431 [details]
Error message : part 3

Comment 4 Pti-seb 2006-09-16 15:28:09 UTC
Created attachment 136432 [details]
Error message : part 4

Comment 5 Pti-seb 2006-09-16 15:29:01 UTC
Created attachment 136433 [details]
Error message : part 5

Comment 6 Pti-seb 2006-09-16 15:30:11 UTC
Created attachment 136434 [details]
Error message : part 6

Comment 7 Dave Jones 2006-09-17 01:20:44 UTC
Does this box survive running memtest86+ for a day ?

Comment 8 Pti-seb 2006-09-18 21:06:32 UTC
Sorry, but I have reinstalled my FC6test3 since this bug, and it works perfectly now.

Comment 10 Jeff Moyer 2006-10-02 19:53:41 UTC
Jonathan is experiencing this very issue with 3 different kernels.  The one I
witnessed is kernel-2.6.18-2708.fc6 on his thinkpad.  rc.sysinit calls rm, which
triggers this bug.  As such, he can't even boot the laptop.  The stack trace is

EIP is at __list_add


Comment 11 Dave Jones 2006-10-02 22:37:45 UTC
Eric, more scary Ext3 memory corruption.

jrb, can you try and get a digicam pic of the panic you see?
Newer kernels will dump the value of the pointer that's corrupted, which might
give a clue.

Comment 12 Eric Sandeen 2006-10-02 23:06:44 UTC
For the folks who have reported this, do you know if your box crashed just
before the problematic boot?  i.e. did the filesystem's journal need to be
replayed on the next (problematic, oopsing) boot?

Comment 13 Jonathan Blandford 2006-10-03 14:28:03 UTC
My box ran out of battery just before the problematic boot.  I can get a picture
of the panic -- any part in particular?

Comment 14 Eric Sandeen 2006-10-03 15:17:02 UTC
jrb, the part that says "list_add corruption. next->prev should be %p, but was
%p" would be interesting, but as much as you can capture...

How big is the root filesystem?  Getting an image of it might be useful for
testing, but that might be tricky.

Comment 15 Jonathan Blandford 2006-10-03 15:46:17 UTC
Eric, I can make the filesystem available to you, but it's a couple gigs.  I'm
in the RH office.  If anyone wants access to the drive, I can provide it. 

Comment 16 Christian Nolte 2006-10-22 13:25:25 UTC
I am experiencing the same problem with the following kernels:


Comment 17 Christian Nolte 2006-10-22 13:28:48 UTC
Created attachment 139079 [details]
screenshot of kernel BUG

Screenshot which shows the stacktrace of the BUG just after the volume group
device cannot be accessed

Comment 18 Eric Sandeen 2006-10-22 15:55:20 UTC
Christian, Sebastien:

1) was your problem also on a laptop?
2) was the timing of this in any way related to suspend actions?
2a) or was this problem encountered after an unclean shutdown? (poweroff)
3) was this right after an install, or had you had some successful boots prior
to this problem?


Comment 19 Christian Nolte 2006-10-22 18:49:38 UTC
1) It's a desktop PC.
2) I am not using suspend.
2a) No unclean shutdown.
3) No fresh installation here, I have successful boots with an other kernel. At
the moment I am using 2.6.18-1.2724.fc5.

Please tell me if you need further information.

Comment 20 Ted Kaczmarek 2006-10-24 17:19:34 UTC
Two more of these.

kernel BUG at lib/list_debug.c:65
invalid opcode: 0000 [#1]
CPU:    1
EIP is at list_del+0x23/0x6c

kernel BUG at lib/list_debug.c
invalid opcode: 0000 [#1]
CPU:    0
EIP is at list_del+0x48/0x6c
eax: 00000048   ebx: f1ebc9c0   ecx: c06789d0   edx: 00000096
esi: f7ffd6c0   edi: f0184000   ebp: f7fffe80   esp: f7fefef8
ds: 007b   es: 007b   ss: 0068
Process events/0 (pid: 8, ti=f7fef000 task=c19005e0 task.ti=f7fef000)
Stack: c063d4a6 f1ebc9c0 e4808ce5 f1ebc9c0 c046a3e5 f7ffc3c0 00000005 f7ffdee0
00000001 f7ffdee0 00000005 f7ffdec0 00000000 c046a4e3 00000000 00000000
f7fffe80 f7ffd6e4 f7ffd6c0 f7fffe80 c18f5240 00000282 c046b919 00000000
Call Trace:

If you need more info let me know.

Comment 21 Eric Sandeen 2006-10-24 17:29:08 UTC
Ted, just wondering about the steps that led up to this in your case.  

Was this at boot time?
Had the machine shut down cleanly before?
Had the machine suspended?
Had there been successful prior boots, or was this the first boot after install?

This is starting to look like a real problem :(

Comment 22 Ted Kaczmarek 2006-10-24 17:49:14 UTC
I have seen this a few times but never on boot.
The UP machine has definitely shut down cleanly; the SMP machine, I think, had
crashed before it ever got a clean reboot.
Neither machine was suspended.
Many successful boots on both machines.

Comment 23 Christian Nolte 2006-10-25 09:26:27 UTC
I did a clean install (reformatted partition) of FC6 today on the same machine.
This problem does not occur with kernel 2.6.18-1.2798.fc6 and a fresh system.

Comment 24 Jared Smith 2006-11-02 23:11:24 UTC
FWIW, I'm seeing something similar on the FC6 kernel (PPC architecture) on a
Powerbook G4.  I'll try to get a digital picture of the screen when it happens,
and see if I can't help track this down.

(To be honest, I'm not sure if mine is related to ext3 or not -- but thought it
might be a useful data point nonetheless.)

Comment 25 Eric Sandeen 2007-01-12 18:50:18 UTC
Is anyone cc'd on this bug still seeing the problem?

Comment 26 Jared Smith 2007-01-12 19:34:02 UTC
I haven't seen it in the past couple of weeks on FC6 (completely up to date
with updates) on PPC arch.  Hopefully it's been eliminated!

Comment 27 Christian Nolte 2007-01-14 10:00:03 UTC
No problems here too since my upgrade to FC6. My current kernel is
2.6.18-1.2869.fc6 i686

Comment 28 Dave Jones 2007-03-19 22:07:41 UTC
It's been a quiet two months.  Let's close this out.

Comment 30 Eric Sandeen 2007-04-12 18:50:33 UTC
A core is great; a testcase is that much nicer, if they can narrow it down.

Comment 31 Eric Sandeen 2007-04-13 04:06:06 UTC
Ok, got the core, FWIW here are the messages since there was no log to go with
it and crash isn't showing it to me either...

<3>list_add corruption. next->prev should be f6d9c12c, but was d6359488
<0>------------[ cut here ]------------
<0>kernel BUG at lib/list_debug.c:26!
<0>invalid opcode: 0000 [#1]
<1>last sysfs file: /class/scsi_host/host3/proc_name
<4>Modules linked in: autofs4 hidp rfcomm l2cap bluetooth sunrpc ipv6 video sbs
i2c_ec i2c_core button battery asus_acpi ac parport_pc lp parport joydev sg
pcspkr serio_raw bnx2 dm_snapshot dm_zero dm_mirror dm_mod qla2300(U)
usb_storage qla2400(U) ext3 jbd ehci_hcd ohci_hcd uhci_hcd qla2xxx(U) sd_mod
scsi_mod qla2xxx_conf(U) intermodule(U)
<0>CPU:    0
<4>EIP:    0060:[<c04e02c0>]    Tainted: GF     VLI
<4>EFLAGS: 00010246   (2.6.18-8.1.1.el5 #1) 
<0>EIP is at __list_add+0x1c/0x52
<0>eax: 00000048   ebx: d6359488   ecx: c0629851   edx: de6abeec
<0>esi: f6d9c12c   edi: dc0ce674   ebp: f724ac34   esp: de6abee8
<0>ds: 007b   es: 007b   ss: 0068
<0>Process fsck (pid: 1770, ti=de6ab000 task=cfbb4000 task.ti=de6ab000)
<0>Stack: c0629851 f6d9c12c d6359488 dc0ce698 00000000 f6dbc89c f896d5a9 f7311400 
<0>       dc0ce600 da43d1a4 00000800 0000013c dc0ce698 00000000 e050ee48 f6dbc89c 
<0>       f896e2d1 cf6b4c9c db79d02c 00000000 e050ee48 f71c65e0 d17cb000 c0476a12 
<0>Call Trace:
<0> [<f896d5a9>] ext3_orphan_add+0x17b/0x1b6 [ext3]
<0> [<f896e2d1>] ext3_unlink+0x12b/0x177 [ext3]
<0> [<c0476a12>] vfs_unlink+0xa3/0xd9
<0> [<c047858f>] do_unlinkat+0x85/0x10e
<0> [<c0406eb0>] do_syscall_trace+0xab/0xb1
<0> [<c0403eff>] syscall_call+0x7/0xb
<0> =======================
<0>Code: c7 43 04 00 02 20 00 c7 03 00 01 10 00 5b c3 57 89 c7 56 89 d6 53 8b 41
04 89 cb 39 d0 74 17 50 52 68 51 98 62 c0 e8 a4 3e f4 ff <0f> 0b 1a 00 03 98 62
c0 83 c4 0c 8b 06 39 d8 74 17 50 53 68 8e 
<0>EIP: [<c04e02c0>] __list_add+0x1c/0x52 SS:ESP 0068:de6abee8

Comment 32 Eric Sandeen 2007-04-13 04:54:00 UTC
So, it appears that the very first inode pointed to from the s_orphan list head
does not point back to that list head - it has been re-init'd w/o first removing
it from the list.

Will dig more later...

Comment 33 Eric Sandeen 2007-04-13 14:44:55 UTC
I'm thinking this may be at least related to bug 196915, which was fixed for
rhel4, but embarrassingly enough not pushed to rhel5.

Any details on what was being tested when this went bang?  Any possibility of
racing links/unlinks on open files?

The patch for bug 196915 looks like this:

--- linux.orig/fs/ext3/namei.c
+++ linux/fs/ext3/namei.c
@@ -2180,6 +2180,12 @@ static int ext3_link (struct dentry * ol
 	if (inode->i_nlink >= EXT3_LINK_MAX)
 		return -EMLINK;
+	/*
+	 * Return -ENOENT if we've raced with unlink and i_nlink is 0.  Doing
+	 * otherwise has the potential to corrupt the orphan inode list.
+	 */
+	if (inode->i_nlink == 0)
+		return -ENOENT;

but I'm not -quite- convinced that it's exactly the same problem here;
especially with respect to the people who hit this on a fresh bootup.  The above
patch certainly does need to go into RHEL5, though, regardless of whether it
addresses this particular issue... I'll keep looking at this.  If this is
reproducible maybe we can test with this patch in place.

Comment 34 Marcus Barrow 2007-04-13 15:01:35 UTC
It has been reproduced several times, but not easily. The system is running an HP test called
"Hazard". It is running on a large system as 1000 processes doing I/O to 176 LUNs on a SAN.
Some of the I/O is through files and some is direct I/O. Races on unlink/open would seem
a likely goal of this testing, but I am not familiar with the specifics.

I will try to test for those races though with some scripts.

Comment 35 Eric Sandeen 2007-04-13 15:14:09 UTC
There is a testcase in bug 196915 I believe, which I bet you can hit on rhel5. 
I just don't know for sure if that is the same as what reporters of -this- bug
are hitting.

Comment 36 Eric Sandeen 2007-04-13 19:01:57 UTC
Another way to approach this if it can be reproduced is to reproduce it with
this (untested) patch (and not the previous one, which hopefully fixes it) in place:

--- linux-2.6.18-13.el5.orig/fs/ext3/ialloc.c
+++ linux-2.6.18-13.el5/fs/ext3/ialloc.c
@@ -114,6 +114,10 @@ void ext3_free_inode (handle_t *handle, 
                printk("ext3_free_inode: inode on nonexistent device\n");
+       if (!list_empty(&EXT3_I(inode)->i_orphan)) {
+               printk("ext3_free_inode: on orphan list\n");
+               BUG();
+       }
        sbi = EXT3_SB(sb);
        ino = inode->i_ino;

which will hopefully catch the case where an inode is freed while still on the
orphan inode list, and perhaps offer another clue as to how we got there.

Comment 37 Eric Sandeen 2007-04-13 20:26:46 UTC
Ahah, I am not so bad after all.  We do have bug 219650 for this same issue in
RHEL5, and the patch was sent already for inclusion in 5.1, at this point.  The
process -does- work, I just forgot that it did.  :)  If/when I convince myself
of it, I may dup this bug to that one.

Comment 38 Eric Sandeen 2007-04-13 21:24:19 UTC
I've put an i686 rpm up with the patch from comment #33 at:


Looking at the dump some more, though, I'm finding some other odd
inconsistencies which make me start to wonder if this is actually something else.

The ext3 inode, and the ext3 sb to which we are adding it, and the ext3 inode's
sb, don't seem to match....

Comment 39 Eric Sandeen 2007-04-13 22:26:18 UTC
Ok, I got lost in a twisty maze there.

This situation still makes sense; the superblock we wish to add the *new* orphan
inode to (which happens to be for file /9/4/1, on the root filesystem, looks
like) has an s_orphan->next pointing to an inode which has been re-used, and now
belongs to the /diskfs/ext3/did00000 filesystem, and that re-used inode has an
empty i_orphan list which of course does not point back to the root's superblock.

So, everything I can see is consistent with a corrupted s_orphan list on the
root filesystems's superblock; an inode that it once pointed to was freed and
re-used without removing it from the superblock's list, and when we try to add a
-new- inode to the root sb's orphan list, the list corruption is detected.

This would come about from racing links & unlinks on the root fs, it seems.

I look forward to reports of success from the kernels in comment #38 :)


Comment 40 Eric Sandeen 2007-04-13 23:07:28 UTC
Er, file /9/4/1 was on the test fs; I haven't located which inode was being
deleted from the root fs yet.

Comment 41 Marcus Barrow 2007-04-14 12:59:27 UTC
I installed your kernel on my test server and ran it overnight against
the repro.tar test scripts. It has now run over 15 hours; previously
the scripts caused a failure in 5 or 10 minutes.

Marcus Barrow

Comment 42 Eric Sandeen 2007-04-14 14:35:38 UTC
Marcus, good news.  I'll be interested to see how it fares on the full qa suite.
 I'll try to modify the reproducer to be better able to hit the sort of panic I
saw in the core, as well.


Comment 45 Mike Miller (OS Dev) 2007-04-17 18:15:12 UTC
Created attachment 152840 [details]
Kernel panic when trying to add 256 disks

We've observed similar behaviour with cciss when playing with a large number
of logical volumes. See the attached log.

Comment 46 Eric Sandeen 2007-04-17 20:23:06 UTC
Mike, that looks like a different path to list corruption.  I'd probably file a
new bug/issue.  Can you reproduce it?

This bug is pretty specific to a race in ext3; that's not what you're hitting.

Comment 48 Eric Sandeen 2007-04-17 23:43:19 UTC
Mike, also it looks like all your modules are unsigned... I'd try to reproduce
it on a stock (?) kernel if you file a new bug.


Comment 49 Andrew Patterson 2007-04-19 22:12:07 UTC
We are getting a similar issue on IA-64 on kernel 2.6.18-8.el5, but the stack
trace is different.  I applied the patch in comment #33 which seems to fix the

 sb orphan head is 1045149
sb_info orphan list:
 inode sda1:1045149 at e0000100501f6630: mode 100644, nlink 1, next 0
Assertion failure in ext3_put_super() at fs/ext3/super.c:420:
kernel BUG at fs/ext3/super.c:420!
umount[27419]: bugcheck! 0 [1]
Modules linked in: nfs lockd fscache nfs_acl dev_acpi(U) autofs4 hidp rfcomm
l2cap bluetooth sunrpc ipv6 vfat fat button parport_pc lp parport joydev sr_mod
cdrom shpchp e1000 sg dm_snapshot dm_zero dm_mirror dm_mod usb_storage qla2xxx
scsi_transport_fc cciss mptsas mptscsih scsi_transport_sas sd_mod scsi_mod
mptbase ext3 jbd ehci_hcd ohci_hcd uhci_hcd

Pid: 27419, CPU 1, comm:               umount
psr : 0000101008526030 ifs : 800000000000038e ip  : [<a0000002006a4980>]    Not
ip is at ext3_put_super+0x480/0x5a0 [ext3]
unat: 0000000000000000 pfs : 000000000000038e rsc : 0000000000000003
rnat: e00001006e229040 bsps: e00001006e228008 pr  : 000000000055a559
ldrs: 0000000000000000 ccv : 0000000000000010 fpsr: 0009804c8a70433f
csd : 0000000000000000 ssd : 0000000000000000
b0  : a0000002006a4980 b6  : a0000001000115a0 b7  : a00000010000bf00
f6  : 1003e00000000000000a0 f7  : 1003e20c49ba5e353f7cf
f8  : 1003e00000000000004e2 f9  : 1003e000000000fa00000
f10 : 1003e000000003b9aca00 f11 : 1003e431bde82d7b634db
r1  : a000000100bfe220 r2  : a000000100a157f0 r3  : a0000001009479e0
r8  : 0000000000000026 r9  : 00000000000000fd r10 : a000000100a15820
r11 : a000000100a15820 r12 : e00001006a897d90 r13 : e00001006a890000
r14 : a000000100a157f0 r15 : 0000000000000000 r16 : a0000001009479e8
r17 : e000010077887e18 r18 : 0000000000000000 r19 : 0000000000000000
r20 : 0000000000000000 r21 : a0000001009fe850 r22 : 0000000000000014
r23 : a000000100849200 r24 : a0000001009fe850 r25 : a000000100a157f8
r26 : a000000100a157f8 r27 : e00001006e229040 r28 : e00001006e228008
r29 : e000010077880060 r30 : e00001006e22802c r31 : e00001007788002c

Call Trace:
[<a000000100014140>] show_stack+0x40/0xa0
                               sp=e00001006a897920 bsp=e00001006a8913b8
[<a000000100014a40>] show_regs+0x840/0x880
                               sp=e00001006a897af0 bsp=e00001006a891360
[<a000000100037ce0>] die+0x1c0/0x2c0
                               sp=e00001006a897af0 bsp=e00001006a891318
[<a000000100037e30>] die_if_kernel+0x50/0x80
                               sp=e00001006a897b10 bsp=e00001006a8912e8
[<a000000100617c00>] ia64_bad_break+0x280/0x4a0
                               sp=e00001006a897b10 bsp=e00001006a8912c0
[<a00000010000c700>] __ia64_leave_kernel+0x0/0x280
                               sp=e00001006a897bc0 bsp=e00001006a8912c0
[<a0000002006a4980>] ext3_put_super+0x480/0x5a0 [ext3]
                               sp=e00001006a897d90 bsp=e00001006a891250
[<a000000100168b70>] generic_shutdown_super+0x1b0/0x2e0
                               sp=e00001006a897d90 bsp=e00001006a891220
[<a000000100168ce0>] kill_block_super+0x40/0x80
                               sp=e00001006a897d90 bsp=e00001006a8911f0
[<a000000100168f90>] deactivate_super+0x170/0x1c0
                               sp=e00001006a897d90 bsp=e00001006a8911c0
[<a00000010019da40>] mntput_no_expire+0xc0/0x200
                               sp=e00001006a897d90 bsp=e00001006a891190
[<a000000100178b80>] path_release_on_umount+0x40/0x60
                               sp=e00001006a897d90 bsp=e00001006a891170
[<a0000001001a10e0>] sys_umount+0x620/0x6e0
                               sp=e00001006a897d90 bsp=e00001006a8910f8
[<a00000010000c490>] __ia64_trace_syscall+0xd0/0x110
                               sp=e00001006a897e30 bsp=e00001006a8910f8
[<a000000000010620>] __start_ivt_text+0xffffffff00010620/0x400
                               sp=e00001006a898000 bsp=e00001006a8910f8
<0>Kernel panic - not syncing: Fatal exception

Comment 50 Eric Sandeen 2007-04-19 22:23:09 UTC
Andrew, yes, that looks like the original sort of backtrace from Bug 219650. 
The root cause is the same, hit different ways - corruption of the orphan inode list.


Comment 51 Eric Sandeen 2007-04-20 21:15:17 UTC
Any word on further testing w/ the Hazard suite?

Comment 54 Paul Hinchman 2007-05-02 21:18:27 UTC
After testing with the Hazard suite, HP Storage labs have a high degree of 
confidence that the BZ kernel resolved the ext3 problem.  It is our 
recommendation that this bug be closed.  We would like some visibility as to 
how/when the fix will be released.
Thanks for all your help.
Paul Hinchman

Comment 56 Paul Hinchman 2007-05-04 23:28:50 UTC
Once again, this bug can be closed and/or dup'd to the original one which is 
fixed by the same patch.  HP has a high degree of confidence that the fixes in 
the BZ kernel fixed the problem.
Paul Hinchman

Comment 57 Eric Sandeen 2007-05-04 23:34:53 UTC
Thanks Paul!


*** This bug has been marked as a duplicate of 219650 ***

Comment 58 Eric Sandeen 2009-06-16 03:55:15 UTC
As a note for future searchers, upstream commit 2988a7740dc0dd9a0cb56576e8fe1d777dff0db3 should have fixed this one.
