Description of problem:
I got into a state where genesis was stuck with one process trying to write to an unlinked inode. The hung process was on doral-p2. All other nodes in the cluster finished their workloads.

Everything seems to be stuck on process 2768 on doral-p2:

There are 3 glocks with waiters.
doral-p2, pid 2759 is waiting for glock 6/bd4aabb, which is held by pid 2760
doral-p2, pid 2744 is waiting for glock 2/bd7f6e8, which is held by pid 2768
doral-p2, pid 2765 is waiting for glock 3/bd404db, which is held by pid 2768
doral-p2, pid 2767 is waiting for glock 3/bd404db, which is held by pid 2768
doral-p2, pid 2760 is waiting for glock 3/bd404db, which is held by pid 2768
doral-p2, pid 2763 is waiting for glock 3/bd404db, which is held by pid 2768
doral-p2, pid 2766 is waiting for glock 3/bd404db, which is held by pid 2768
doral-p2, pid 2764 is waiting for glock 3/bd404db, which is held by pid 2768

[root@doral-p2 ~]# ls -l /proc/2768/fd
total 0
lrwx------ 1 root root 64 Jun 11 08:29 0 -> socket:[1486068]
lrwx------ 1 root root 64 Jun 11 08:29 1 -> socket:[1486069]
lrwx------ 1 root root 64 Jun 11 08:29 2 -> socket:[1486069]
lrwx------ 1 root root 64 Jun 11 08:29 3 -> /mnt/brawl/doral-p2/gendir_41/qahaupevichabhulhqdejbtp (deleted)

genesis       D 000000000ff11098 10080  2768   2699          2767 (NOTLB)
Call Trace:
[C000000000EB6D90] [C000000000576FF0] 0xc000000000576ff0 (unreliable)
[C000000000EB6F60] [C000000000010B0C] .__switch_to+0x124/0x148
[C000000000EB6FF0] [C0000000003D0748] .schedule+0xc08/0xdbc
[C000000000EB7100] [C00000000011DA64] .inode_wait+0x10/0x28
[C000000000EB7170] [C0000000003D1870] .__wait_on_bit+0xa0/0x114
[C000000000EB7220] [C0000000003D197C] .out_of_line_wait_on_bit+0x98/0xc8
[C000000000EB7320] [C00000000011E06C] .ifind+0xc4/0x118
[C000000000EB73D0] [C00000000011F3D0] .iget5_locked+0xa0/0x250
[C000000000EB74A0] [D000000000EAEB14] .gfs2_inode_lookup+0x60/0x284 [gfs2]
[C000000000EB75A0] [D000000000EC5130] .try_rgrp_unlink+0xd0/0x11c [gfs2]
[C000000000EB7650] [D000000000EC5BFC] .gfs2_inplace_reserve_i+0x320/0x818 [gfs2]
[C000000000EB77B0] [D000000000EB74F4] .gfs2_write_begin+0x140/0x33c [gfs2]
[C000000000EB7880] [D000000000EB9280] .gfs2_file_buffered_write+0x110/0x2f8 [gfs2]
[C000000000EB79B0] [D000000000EB96E8] .__gfs2_file_aio_write_nolock+0x280/0x2f0 [gfs2]
[C000000000EB7AB0] [D000000000EB97F0] .gfs2_file_write_nolock+0x98/0x12c [gfs2]
[C000000000EB7C30] [D000000000EB99E0] .gfs2_file_write+0x60/0xf4 [gfs2]
[C000000000EB7CF0] [C0000000000F8E24] .vfs_write+0x118/0x200
[C000000000EB7D90] [C0000000000F9594] .sys_write+0x4c/0x8c
[C000000000EB7E30] [C0000000000086A4] syscall_exit+0x0/0x40

I'll include complete stack traces and lock dumps in an attachment.

Version-Release number of selected component (if applicable):
kernel-2.6.18-150.el5.bz502944

How reproducible:
Unknown

Actual results:

Expected results:

Additional info:
Created attachment 347409 [details]
lock dumps and stack traces

This tarball includes the GFS2 and DLM dumps from all nodes and the stack traces from doral-p2.
I can see what's going on. During block allocation for the unlinked inode, it has actually found itself in the unlinked inode list, and the lookup has got stuck trying to look itself up in order to unlink itself. Probably not too tricky to fix.
Patch posted to cluster-devel
Nate, the fix for this was in abhi's latest kernel, can you test it to make sure it fixes this?
Created attachment 361023 [details]
patch posted to rhkernel-list
This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux maintenance release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux Update release for currently deployed products. This request is not yet committed for inclusion in an Update release.
in kernel-2.6.18-169.el5
You can download this test kernel from http://people.redhat.com/dzickus/el5

Please do NOT transition this bugzilla state to VERIFIED until our QE team has sent specific instructions indicating when to do so. However, feel free to provide a comment indicating that this fix has been verified.
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2010-0178.html