Bug 520985 - GFS: fatal: assertion "gfs_glock_is_locked_by_me(gl) && gfs_glock_is_held_excl(gl)" failed
Summary: GFS: fatal: assertion "gfs_glock_is_locked_by_me(gl) && gfs_glock_is_held_ex...
Keywords:
Status: CLOSED DUPLICATE of bug 471258
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: gfs-kmod
Version: 5.4
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Robert Peterson
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2009-09-03 02:42 UTC by Nate Straz
Modified: 2010-01-12 03:31 UTC
CC List: 1 user

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2009-09-03 15:46:14 UTC
Target Upstream Version:
Embargoed:



Description Nate Straz 2009-09-03 02:42:49 UTC
Description of problem:

While running through our normal regression loads with SELinux in permissive mode, I ran into this panic during dd_lock with a 1k block size.

GFS: fsid=tankmorph:brawl0.3: fatal: assertion "gfs_glock_is_locked_by_me(gl) && gfs_glock_is_held_excl(gl)" failed
GFS: fsid=tankmorph:brawl0.3:   function = gfs_trans_add_gl
GFS: fsid=tankmorph:brawl0.3:   file = /builddir/build/BUILD/gfs-kmod-0.1.34/_kmod_build_PAE/src/gfs/trans.c, line = 237
GFS: fsid=tankmorph:brawl0.3:   time = 1251931089
GFS: fsid=tankmorph:brawl0.3: about to withdraw from the cluster
------------[ cut here ]------------
kernel BUG at /builddir/build/BUILD/gfs-kmod-0.1.34/_kmod_build_PAE/src/gfs/lm.c:110!
invalid opcode: 0000 [#1]
SMP
last sysfs file: /devices/pci0000:00/0000:00:02.0/0000:01:1f.0/0000:03:02.1/irq
Modules linked in: sctp dm_log_clustered(U) lock_nolock gfs(U) lock_dlm gfs2 dlm xt_tcpudp
xt_state ip_conntrack nfnetlink iptable_filter ip_tables x_tables gnbd(U) configfs autofs4
hidp rfcomm l2cap bluetooth lockd sunrpc ipv6 xfrm_nalgo crypto_api dm_multipath scsi_dh
video hwmon backlight sbs i2c_ec button battery asus_acpi ac lp e1000 parport_pc ide_cd
parport i2c_i801 floppy cdrom e7xxx_edac intel_rng edac_mc i2c_core pcspkr sg dm_raid45
dm_message dm_region_hash dm_mem_cache dm_snapshot dm_zero dm_mirror dm_log dm_mod qla2xxx
scsi_transport_fc ata_piix libata sd_mod scsi_mod ext3 jbd uhci_hcd ohci_hcd ehci_hcd
CPU:    2
EIP:    0060:[<f9108b44>]    Tainted: G      VLI
EFLAGS: 00010202   (2.6.18-164.el5PAE #1)
EIP is at gfs_lm_withdraw+0x48/0x82 [gfs]
eax: 00000044   ebx: f900f000   ecx: 00000086   edx: 00000000
esi: f902b784   edi: eed399a8   ebp: f900f000   esp: e8442cfc
ds: 007b   es: 007b   ss: 0068
Process writeread (pid: 3487, ti=e8442000 task=e94dc550 task.ti=e8442000)
Stack: e8442d14 f902b784 eed399cc f911ef62 f900f000 f9126319 f902b784 f9125cfd
       f902b784 f912040c f902b784 f9125cb7 000000ed f902b784 4a9ef3d1 eed399a8
       f911dd9c f9125cb7 000000ed f4bc6d4c d0c7fda0 f911de46 00000000 f32e181c
Call Trace:
 [<f911ef62>] gfs_assert_withdraw_i+0x26/0x31 [gfs]
 [<f911dd9c>] gfs_trans_add_gl+0x6c/0x98 [gfs]
 [<f911de46>] gfs_trans_add_bh+0x7e/0xa2 [gfs]
 [<f90fca91>] ea_set_simple+0xef/0x1ff [gfs]
 [<f90fc9a2>] ea_set_simple+0x0/0x1ff [gfs]
 [<f90fb277>] ea_foreach_i+0x9c/0xeb [gfs]
 [<f90fb31d>] ea_foreach+0x57/0x16d [gfs]
 [<f90fc9a2>] ea_set_simple+0x0/0x1ff [gfs]
 [<f90fb31d>] ea_foreach+0x57/0x16d [gfs]
 [<f90fb427>] ea_foreach+0x161/0x16d [gfs]
 [<f90fc8ee>] ea_set_i+0x32/0x93 [gfs]
 [<f90fcdf6>] gfs_ea_set_i+0xc5/0x15d [gfs]
 [<f9113c31>] gfs_security_init+0x8d/0xae [gfs]
 [<f9115119>] gfs_create+0x12d/0x193 [gfs]
 [<c047f68f>] vfs_create+0xc8/0x12f
 [<c048203b>] open_namei+0x16a/0x5fb
 [<c0471622>] __dentry_open+0xea/0x1ab
 [<c0471772>] do_filp_open+0x1c/0x31
 [<c04717c5>] do_sys_open+0x3e/0xae
 [<c0471862>] sys_open+0x16/0x18
 [<c0404f17>] syscall_call+0x7/0xb
 =======================
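
For context on what the trace shows: gfs_trans_add_gl() asserts that the calling
process holds the glock in the exclusive state before the buffer is added to the
transaction; when that check fails, gfs_assert_withdraw_i() goes down the withdraw
path, which ends in the BUG() at lm.c:110 reported above. The sketch below only
illustrates that check as a standalone userspace C program; the struct layout,
field names, and helper names are assumptions for illustration, not the actual
gfs-kmod source.

/* Illustration only: models the assertion that fails in gfs_trans_add_gl().
 * The real predicates live in gfs-kmod; the fields and helpers below are
 * hypothetical stand-ins. */
#include <stdio.h>
#include <stdbool.h>

struct glock_sketch {
        int owner_pid;          /* pid that holds the lock, 0 if unheld */
        bool exclusive;         /* true if held in the exclusive state  */
};

/* Stand-ins for gfs_glock_is_locked_by_me() and gfs_glock_is_held_excl(). */
static bool locked_by_me(const struct glock_sketch *gl, int my_pid)
{
        return gl->owner_pid == my_pid;
}

static bool held_excl(const struct glock_sketch *gl)
{
        return gl->exclusive;
}

/* Mirrors the check from the panic text: both conditions must hold, or the
 * filesystem withdraws (here we just print the message and bail out). */
static int trans_add_gl_sketch(const struct glock_sketch *gl, int my_pid)
{
        if (!(locked_by_me(gl, my_pid) && held_excl(gl))) {
                printf("fatal: assertion \"gfs_glock_is_locked_by_me(gl) && "
                       "gfs_glock_is_held_excl(gl)\" failed\n");
                return -1;      /* real code withdraws from the cluster */
        }
        return 0;               /* glock may join the transaction */
}

int main(void)
{
        /* A glock held by this process but not in the exclusive state trips
         * the same assertion the trace reports during the xattr write. */
        struct glock_sketch gl = { .owner_pid = 3487, .exclusive = false };
        return trans_add_gl_sketch(&gl, 3487) ? 1 : 0;
}

Compiled as-is this just prints the assertion text; the point is only that the
assertion fires when the process adding to the transaction holds the glock but
not exclusively, which per the trace happens under gfs_security_init() while the
SELinux xattr is being written during file creation.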


Version-Release number of selected component (if applicable):
kernel-2.6.18-164.el5
kmod-gfs-0.1.34-2.el5


How reproducible:
Unknown

Steps to Reproduce:
1. Make a GFS file system with a 1k file system block size and mount it
2. Run the writeread test from multiple nodes
  
Actual results:


Expected results:


Additional info:

Comment 1 Robert Peterson 2009-09-03 15:46:14 UTC
I'm closing this as a duplicate of bug #471258.  Perhaps you
found a more reliable way to recreate the failure.

*** This bug has been marked as a duplicate of bug 471258 ***

