Bug 240433 - XFS buglet: bad unlock balance detected!
Status: CLOSED RAWHIDE
Product: Fedora
Classification: Fedora
Component: kernel
Version: rawhide
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Eric Sandeen
QA Contact: Brian Brock
 
Reported: 2007-05-17 10:53 EDT by Jarod Wilson
Modified: 2007-11-30 17:12 EST
CC List: 0 users

Doc Type: Bug Fix
Last Closed: 2007-06-18 10:46:31 EDT

Attachments: None
Description Jarod Wilson 2007-05-17 10:53:14 EDT
Description of problem:
Upon unmounting an xfs-formatted file-backed partition, I was greeted with the
following spew:

=====================================
[ BUG: bad unlock balance detected! ]
-------------------------------------
umount/25787 is trying to release lock (&(&ip->i_iolock)->mr_lock) at:
[<f9035f23>] xfs_iunlock+0x2d/0x6f [xfs]
but there are no more locks to release!

other info that might help us debug this:
2 locks held by umount/25787:
 #0:  (&type->s_umount_key#22){----}, at: [<c048007e>] deactivate_super+0x58/0x6f
 #1:  (&type->s_lock_key#13){--..}, at: [<c0614eba>] mutex_lock+0x21/0x24

stack backtrace:
 [<c04061e9>] show_trace_log_lvl+0x1a/0x2f
 [<c04067ad>] show_trace+0x12/0x14
 [<c0406831>] dump_stack+0x16/0x18
 [<c0440d67>] print_unlock_inbalance_bug+0xec/0xf9
 [<c0442bc8>] lock_release_non_nested+0x9e/0x162
 [<c0442dc9>] lock_release+0x13d/0x159
 [<c043b879>] up_read+0x16/0x29
 [<f9035f23>] xfs_iunlock+0x2d/0x6f [xfs]
 [<f903619b>] xfs_ireclaim+0x78/0x82 [xfs]
 [<f9052537>] xfs_finish_reclaim+0x124/0x12e [xfs]
 [<f9052661>] xfs_reclaim+0x6b/0xe0 [xfs]
 [<f905fc11>] xfs_fs_clear_inode+0x97/0xba [xfs]
 [<c048f5d5>] clear_inode+0xd3/0x122
 [<c048f852>] generic_drop_inode+0x11e/0x130
 [<c048ed32>] iput+0x63/0x66
 [<f90505ed>] xfs_unmount+0xdd/0x158 [xfs]
 [<f906033c>] vfs_unmount+0x1a/0x1e [xfs]
 [<f905f666>] xfs_fs_put_super+0x2e/0x69 [xfs]
 [<c047ff3a>] generic_shutdown_super+0x55/0xbe
 [<c047ffc3>] kill_block_super+0x20/0x32
 [<c0480083>] deactivate_super+0x5d/0x6f
 [<c0491401>] mntput_no_expire+0x42/0x72
 [<c0484516>] path_release_on_umount+0x15/0x18
 [<c0491ac1>] sys_umount+0x1e3/0x217
 [<c0491b0e>] sys_oldumount+0x19/0x1b
 [<c0405078>] syscall_call+0x7/0xb
 =======================


Version-Release number of selected component (if applicable):
kernel-2.6.21-1.3116.fc7.i686
(will try to reproduce with a more recent one as well)
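
The report itself comes from the kernel's lock debugging (lockdep), so it can only appear on kernels built with that option. As an illustrative aside, one way to confirm this on the running kernel is sketched below; the /boot/config-* path is an assumption based on how Fedora normally packages kernels.

# Check that the running kernel has lock debugging built in, without
# which the "bad unlock balance" report cannot fire (path is assumed).
grep -E 'CONFIG_(DEBUG_LOCK_ALLOC|PROVE_LOCKING)=' /boot/config-"$(uname -r)"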

Steps to Reproduce:
# dd if=/dev/zero of=/scratch/testfile bs=4k count=100000
# mkfs.xfs /scratch/testfile
# mkdir /mnt/testfile
# mount /scratch/testfile /mnt/testfile
# umount /mnt/testfile
Comment 1 Jarod Wilson 2007-05-17 11:33:22 EDT
So far, I've been unable to reproduce this with kernel 2.6.21-1.3163.fc7.i686. I've got mount/unmount running in a loop and will let it run for a while.
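
For illustration only, here is a minimal sketch of such a mount/unmount loop; the image path, mount point, iteration count, and the use of a loop device are assumptions rather than details taken from this report.

#!/bin/sh
# Repeatedly mount and unmount the XFS image to try to re-trigger the
# lockdep warning; adjust the paths and iteration count as needed.
IMG=/scratch/testfile
MNT=/mnt/testfile
i=0
while [ "$i" -lt 500 ]; do
    mount -o loop "$IMG" "$MNT" || exit 1
    umount "$MNT" || exit 1
    i=$((i + 1))
done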
Comment 2 Jarod Wilson 2007-06-18 10:46:31 EDT
Couldn't ever reproduce this; it appears to have been just temporary rawhide breakage.
