Bug 964038 - longevity: glusterfsd was hung for more than 120s which made the node unreachable.
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: 2.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: amainkar
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-05-17 06:27 UTC by Vijaykumar Koppad
Modified: 2015-03-23 07:40 UTC
CC: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-05-28 07:31:42 UTC



Description Vijaykumar Koppad 2013-05-17 06:27:05 UTC
Description of problem: In the longevity test run, which had been running for a month, one of the clients doing rm -rf on the mount point hung and was stuck there for a long time. One of the RHS nodes was not reachable.

This is the dmesg output we got after the reboot:

INFO: task glusterfsd:1730 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
glusterfsd    D 0000000000000002     0  1730      1 0x00000000
 ffff880119bcd698 0000000000000082 0000000000000000 0000000000000000
 0000000001000000 ffff880028313b00 0000000000000001 ffff880028315f80
 ffff88011877fab8 ffff880119bcdfd8 000000000000f4e8 ffff88011877fab8
Call Trace:
 [<ffffffff814eea15>] schedule_timeout+0x215/0x2e0
 [<ffffffff814ef932>] __down+0x72/0xb0
 [<ffffffffa01ab402>] ? _xfs_buf_find+0x102/0x280 [xfs]
 [<ffffffff81096e01>] down+0x41/0x50
 [<ffffffffa01ab243>] xfs_buf_lock+0x53/0x110 [xfs]
 [<ffffffffa01ab402>] _xfs_buf_find+0x102/0x280 [xfs]
 [<ffffffffa01ab5eb>] xfs_buf_get+0x6b/0x1a0 [xfs]
 [<ffffffffa01abc5c>] xfs_buf_read+0x2c/0x100 [xfs]
 [<ffffffffa01a1870>] xfs_trans_read_buf+0x190/0x400 [xfs]
 [<ffffffffa0175069>] xfs_da_do_buf+0x279/0x770 [xfs]
 [<ffffffffa01755da>] xfs_da_read_buf+0x2a/0x30 [xfs]
 [<ffffffffa01765f3>] ? xfs_da_node_lookup_int+0xa3/0x2b0 [xfs]
 [<ffffffffa01765f3>] xfs_da_node_lookup_int+0xa3/0x2b0 [xfs]
 [<ffffffffa017d9ba>] xfs_dir2_node_lookup+0x5a/0x150 [xfs]
 [<ffffffffa0177c06>] ? xfs_dir2_isleaf+0x26/0x60 [xfs]
 [<ffffffffa0178598>] xfs_dir_lookup+0x168/0x170 [xfs]
 [<ffffffffa01a4737>] xfs_lookup+0x87/0x110 [xfs]
 [<ffffffffa01b1274>] xfs_vn_lookup+0x54/0xa0 [xfs]
 [<ffffffff8118526c>] do_lookup+0x16c/0x1e0
 [<ffffffff81185b14>] __link_path_walk+0x734/0x1030
 [<ffffffff81216282>] ? selinux_inode_permission+0x72/0xb0
 [<ffffffff8118669a>] path_walk+0x6a/0xe0
 [<ffffffff8118686b>] do_path_lookup+0x5b/0xa0
 [<ffffffff811874d7>] user_path_at+0x57/0xa0
 [<ffffffff81070ac7>] ? current_fs_time+0x27/0x30
 [<ffffffffa0178299>] ? xfs_readdir+0xf9/0x130 [xfs]
 [<ffffffff8117c0c6>] vfs_fstatat+0x46/0x80
 [<ffffffff8118abd0>] ? filldir+0x0/0xe0
 [<ffffffff8117c124>] sys_newfstatat+0x24/0x50
 [<ffffffff8118b00e>] ? sys_getdents+0xbe/0xf0
 [<ffffffff814f0995>] ? page_fault+0x25/0x30
 [<ffffffff8100b0f2>] system_call_fastpath+0x16/0x1b
INFO: task glusterfsd:1733 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
glusterfsd    D 0000000000000001     0  1733      1 0x00000000
 ffff8801192b1698 0000000000000082 0000000000000000 0000000000000000
 0000000001000000 ffff880028293b00 0000000000000001 ffff880028295f80
 ffff880118719b38 ffff8801192b1fd8 000000000000f4e8 ffff880118719b38
Call Trace:
 [<ffffffff814eea15>] schedule_timeout+0x215/0x2e0
 [<ffffffff814ef932>] __down+0x72/0xb0
 [<ffffffffa01ab402>] ? _xfs_buf_find+0x102/0x280 [xfs]
 [<ffffffff81096e01>] down+0x41/0x50
 [<ffffffffa01ab243>] xfs_buf_lock+0x53/0x110 [xfs]
 [<ffffffffa01ab402>] _xfs_buf_find+0x102/0x280 [xfs]
 [<ffffffffa01ab5eb>] xfs_buf_get+0x6b/0x1a0 [xfs]
 [<ffffffffa01abc5c>] xfs_buf_read+0x2c/0x100 [xfs]
 [<ffffffffa01a1870>] xfs_trans_read_buf+0x190/0x400 [xfs]
 [<ffffffffa0175069>] xfs_da_do_buf+0x279/0x770 [xfs]
 [<ffffffffa01755da>] xfs_da_read_buf+0x2a/0x30 [xfs]
 [<ffffffffa01765f3>] ? xfs_da_node_lookup_int+0xa3/0x2b0 [xfs]
 [<ffffffffa01765f3>] xfs_da_node_lookup_int+0xa3/0x2b0 [xfs]
 [<ffffffffa017d9ba>] xfs_dir2_node_lookup+0x5a/0x150 [xfs]
 [<ffffffffa0177c06>] ? xfs_dir2_isleaf+0x26/0x60 [xfs]
 [<ffffffffa0178598>] xfs_dir_lookup+0x168/0x170 [xfs]
 [<ffffffffa01a4737>] xfs_lookup+0x87/0x110 [xfs]
 [<ffffffffa01b1274>] xfs_vn_lookup+0x54/0xa0 [xfs]
 [<ffffffff8118526c>] do_lookup+0x16c/0x1e0
 [<ffffffff81185b14>] __link_path_walk+0x734/0x1030
 [<ffffffff81216282>] ? selinux_inode_permission+0x72/0xb0
 [<ffffffff8118669a>] path_walk+0x6a/0xe0
 [<ffffffff8118686b>] do_path_lookup+0x5b/0xa0
 [<ffffffff811874d7>] user_path_at+0x57/0xa0
 [<ffffffff81070ac7>] ? current_fs_time+0x27/0x30
 [<ffffffffa0178299>] ? xfs_readdir+0xf9/0x130 [xfs]
 [<ffffffff8117c0c6>] vfs_fstatat+0x46/0x80
 [<ffffffff8118abd0>] ? filldir+0x0/0xe0
 [<ffffffff8117c124>] sys_newfstatat+0x24/0x50
 [<ffffffff8118b00e>] ? sys_getdents+0xbe/0xf0
 [<ffffffff814f0995>] ? page_fault+0x25/0x30
 [<ffffffff8100b0f2>] system_call_fastpath+0x16/0x1b

In one of the brick logs we got these error messages (actually we got them in all the brick logs on that node):


[2013-05-13 16:19:48.117550] E [posix-helpers.c:420:posix_pstat] 0-long1-posix: lstat failed on /bricks/long1_brick0/ and return value is 5 instead of -1. Please see dmesg output to check whether the failure is due to backend filesystem issue
[2013-05-13 16:19:48.117571] E [posix.c:154:posix_lookup] 0-long1-posix: lstat on /bricks/long1_brick0/ failed: Success
[2013-05-13 16:19:48.117588] I [server3_1-fops.c:151:server_lookup_cbk] 0-long1-server: 883: LOOKUP / (00000000-0000-0000-0000-000000000001) ==> -1 (Success)
[2013-05-13 16:19:50.017966] E [posix-helpers.c:420:posix_pstat] 0-long1-posix: lstat failed on /bricks/long1_brick0/ and return value is 5 instead of -1. Please see dmesg output to check whether the failure is due to backend filesystem issue
[2013-05-13 16:19:50.018032] E [posix.c:154:posix_lookup] 0-long1-posix: lstat on /bricks/long1_brick0/ failed: Success
[2013-05-13 16:19:50.018093] E [posix-helpers.c:420:posix_pstat] 0-long1-posix: lstat failed on /bricks/long1_brick0/ and return value is 5 instead of -1. Please see dmesg output to check whether the failure is due to backend filesystem issue
[2013-05-13 16:19:50.018121] E [posix.c:154:posix_lookup] 0-long1-posix: lstat on /bricks/long1_brick0/ failed: Success
[2013-05-13 16:19:50.018151] I [server3_1-fops.c:151:server_lookup_cbk] 0-long1-server: 878: LOOKUP / (00000000-0000-0000-0000-000000000001) ==> -1 (Success)
[2013-05-13 16:20:03.294487] E [posix-helpers.c:420:posix_pstat] 0-long1-posix: lstat failed on /bricks/long1_brick0/ and return value is 5 instead of -1. Please see dmesg output to check whether the failure is due to backend filesystem issue
[2013-05-13 16:20:03.294556] E [posix.c:154:posix_lookup] 0-long1-posix: lstat on /bricks/long1_brick0/ failed: Success
[2013-05-13 16:20:03.294622] E [posix-helpers.c:420:posix_pstat] 0-long1-posix: lstat failed on /bricks/long1_brick0/ and return value is 5 instead of -1. Please see dmesg output to check whether the failure is due to backend filesystem issue
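
For context: the "return value is 5 instead of -1" message comes from a sanity check around lstat(2), which is only supposed to return 0 or -1; any other value is flagged as a likely backend filesystem problem, and because errno is typically still 0 at that point the follow-up posix_lookup error prints as "failed: Success". The snippet below is a minimal standalone sketch of that pattern, not the actual posix-helpers.c code; the function name check_lstat and the path in main() are illustrative only.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

static int
check_lstat (const char *path)
{
        struct stat st;
        int         ret = lstat (path, &st);

        if (ret == 0)
                return 0;                /* normal success */

        if (ret != -1) {
                /* lstat(2) should only return 0 or -1; anything else points
                 * at a misbehaving backend filesystem, so direct the admin
                 * to dmesg. */
                fprintf (stderr,
                         "lstat failed on %s and return value is %d instead of -1. "
                         "Please see dmesg output to check whether the failure is "
                         "due to backend filesystem issue\n", path, ret);
                ret = -1;
        }

        /* errno can still be 0 here, which is why the brick log shows
         * "failed: Success". */
        fprintf (stderr, "lstat on %s failed: %s\n", path, strerror (errno));
        return ret;
}

int
main (void)
{
        /* example path only; the brick path in this report was /bricks/long1_brick0/ */
        return check_lstat ("/tmp/does-not-exist") ? 1 : 0;
}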


Before these error messages, there were error messages like the following in the brick log file:

[2013-05-13 16:14:36.286563] W [server-resolve.c:130:resolve_gfid_cbk] 0-long1-server: 0e1490ff-fe69-4407-9e9c-30d9a9ae9e79: failed to resolve (Input/output error)
[2013-05-13 16:14:36.286573] I [server3_1-fops.c:1541:server_open_cbk] 0-long1-server: 1084930: OPEN (null) (--) ==> -1 (No such file or directory)
[2013-05-13 16:14:36.287174] W [posix-helpers.c:355:posix_istat] 0-long1-posix: lstat failed on /bricks/long1_brick0/.glusterfs/84/82/8482e74b-e261-4367-8c09-baf3f098d34e (Input/output error)
[2013-05-13 16:14:36.287199] E [posix.c:154:posix_lookup] 0-long1-posix: lstat on /bricks/long1_brick0/.glusterfs/84/82/8482e74b-e261-4367-8c09-baf3f098d34e failed: Input/output error
[2013-05-13 16:14:36.287211] W [server-resolve.c:130:resolve_gfid_cbk] 0-long1-server: 8482e74b-e261-4367-8c09-baf3f098d34e: failed to resolve (Input/output error)
[2013-05-13 16:14:36.287454] W [posix-helpers.c:355:posix_istat] 0-long1-posix: lstat failed on /bricks/long1_brick0/.glusterfs/84/82/8482e74b-e261-4367-8c09-baf3f098d34e (Input/output error)
[2013-05-13 16:14:36.287478] E [posix.c:154:posix_lookup] 0-long1-posix: lstat on /bricks/long1_brick0/.glusterfs/84/82/8482e74b-e261-4367-8c09-baf3f098d34e failed: Input/output error
[2013-05-13 16:14:36.287499] W [server-resolve.c:130:resolve_gfid_cbk] 0-long1-server: 8482e74b-e261-4367-8c09-baf3f098d34e: failed to resolve (Input/output error)
[2013-05-13 16:14:36.287510] I [server3_1-fops.c:252:server_inodelk_cbk] 0-long1-server: 1084932: INODELK (null) (--) ==> -1 (No such file or directory)
[2013-05-13 16:14:36.287918] W [posix-helpers.c:355:posix_istat] 0-long1-posix: lstat failed on /bricks/long1_brick0/.glusterfs/84/82/8482e74b-e261-4367-8c09-baf3f098d34e (Input/output error)



Version-Release number of selected component (if applicable): glusterfs-3.3.0.7rhs-1.el6rhs.x86_64


How reproducible: It happened during the longevity test run.


Steps to Reproduce:
Since this happened during the longevity test run, we are not sure about the exact steps. We observed the hang while test1 was doing rm -rf on the FUSE mount point. What the test does is:

1. It creates 10*10*10000 files: 10 directories at one level, nested 10 levels deep, with 10K files in each directory (a scaled-down sketch of this step follows the list).
2. It then does operations on those files: creating symlinks, hardlinks, chmod, chgrp, chown.
3. It then removes the tree with rm -rf on the mount point (the step during which the hang was observed).
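
Scaled-down sketch of the create phase (step 1), assuming the layout implied by the 10*10*10000 arithmetic: 10 top-level directories, each nested 10 levels deep, with 10K files in every directory. The counts, directory/file names, and mount-point path below are illustrative only and are not taken from the actual test1 script.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

enum { DIRS = 3, DEPTH = 3, FILES = 5 };   /* 10, 10 and 10000 in the real run */

/* create FILES empty files inside dir */
static void
fill_files (const char *dir)
{
        char path[4096];

        for (int f = 0; f < FILES; f++) {
                snprintf (path, sizeof (path), "%s/file_%d", dir, f);
                int fd = open (path, O_CREAT | O_WRONLY, 0644);
                if (fd >= 0)
                        close (fd);
        }
}

int
main (int argc, char *argv[])
{
        /* point this at a FUSE mount, e.g. ./mktree /mnt/long1 (example path) */
        const char *root = (argc > 1) ? argv[1] : ".";
        char        path[4096];

        for (int d = 0; d < DIRS; d++) {
                /* one top-level directory ... */
                snprintf (path, sizeof (path), "%s/dir_%d", root, d);
                mkdir (path, 0755);
                fill_files (path);

                /* ... with DEPTH - 1 further levels nested inside it */
                for (int lvl = 1; lvl < DEPTH; lvl++) {
                        size_t len = strlen (path);
                        snprintf (path + len, sizeof (path) - len, "/level_%d", lvl);
                        mkdir (path, 0755);
                        fill_files (path);
                }
        }
        return 0;
}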
  
Actual results: The glusterfsd process was hung, which made the node unreachable.


Expected results: glusterfsd should run properly. 


Additional info:

Comment 3 Amar Tumballi 2013-05-21 05:59:29 UTC
Vijay,

Keeping it in your name to follow up with the RHEL team.

[2013-05-13 16:20:03.294622] E [posix-helpers.c:420:posix_pstat] 0-long1-posix: lstat failed on /bricks/long1_brick0/ and return value is 5 instead of -1. Please see dmesg output to check whether the failure is due to backend filesystem issue

Comment 4 Vijay Bellur 2013-05-28 07:31:42 UTC

*** This bug has been marked as a duplicate of bug 908158 ***

Comment 5 Brian Foster 2013-05-28 11:53:28 UTC
Perhaps I'm missing something, but what suggests this bug is a duplicate of 908158? I'm going from memory a bit here so I could be wrong, but IIRC 908158 was an fs shutdown and wasn't associated with any kind of hangs or stalls on its own...

Is the following error observed in the system logs for this issue?

    xfs_iunlink_remove: xfs_inotobp() returned error 22.

Comment 6 Brian Foster 2013-05-31 20:03:19 UTC
The report doesn't indicate which server had the issue, but from the messages files in the sosreports it appears to be rhs01:

May 17 11:04:10 longevity-rhs01 kernel: INFO: task glusterfsd:1730 blocked for more than 120 seconds.
May 17 11:04:10 longevity-rhs01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
May 17 11:04:10 longevity-rhs01 kernel: glusterfsd    D 0000000000000002     0  1730      1 0x00000000
May 17 11:04:10 longevity-rhs01 kernel: ffff880119bcd698 0000000000000082 0000000000000000 0000000000000000
May 17 11:04:10 longevity-rhs01 kernel: 0000000001000000 ffff880028313b00 0000000000000001 ffff880028315f80
May 17 11:04:10 longevity-rhs01 kernel: ffff88011877fab8 ffff880119bcdfd8 000000000000f4e8 ffff88011877fab8
May 17 11:04:10 longevity-rhs01 kernel: Call Trace:
May 17 11:04:10 longevity-rhs01 kernel: [<ffffffff814eea15>] schedule_timeout+0x215/0x2e0
May 17 11:04:10 longevity-rhs01 kernel: [<ffffffff814ef932>] __down+0x72/0xb0
May 17 11:04:10 longevity-rhs01 kernel: [<ffffffffa01ab402>] ? _xfs_buf_find+0x102/0x280 [xfs]
May 17 11:04:10 longevity-rhs01 kernel: [<ffffffff81096e01>] down+0x41/0x50
...

There are hangs prior to this in the log, e.g.:

May 17 10:34:23 longevity-rhs01 kernel: INFO: task glusterfsd:1802 blocked for more than 120 seconds.

... and others, going back to the following (which looks similar to bug 967593, except at unlink rather than log recovery and slightly different free space conditions):

May 12 13:53:05 longevity-rhs01 kernel: XFS: Internal error XFS_WANT_CORRUPTED_GOTO at line 1638 of file fs/xfs/xfs_alloc.c.  Caller 0xffffffffa0151801
May 12 13:53:05 longevity-rhs01 kernel:
May 12 13:53:05 longevity-rhs01 kernel: Pid: 1727, comm: glusterfsd Not tainted 2.6.32-220.32.1.el6.x86_64 #1
May 12 13:53:05 longevity-rhs01 kernel: Call Trace:
May 12 13:53:05 longevity-rhs01 kernel: [<ffffffffa0178f1f>] ? xfs_error_report+0x3f/0x50 [xfs]
May 12 13:53:05 longevity-rhs01 kernel: [<ffffffffa0151801>] ? xfs_free_extent+0x101/0x130 [xfs]
May 12 13:53:05 longevity-rhs01 kernel: [<ffffffffa014e709>] ? xfs_alloc_lookup_eq+0x19/0x20 [xfs]
May 12 13:53:05 longevity-rhs01 kernel: [<ffffffffa014f946>] ? xfs_free_ag_extent+0x626/0x750 [xfs]
May 12 13:53:05 longevity-rhs01 kernel: [<ffffffffa0151801>] ? xfs_free_extent+0x101/0x130 [xfs]
May 12 13:53:05 longevity-rhs01 kernel: [<ffffffffa015ae4d>] ? xfs_bmap_finish+0x15d/0x1a0 [xfs]
May 12 13:53:05 longevity-rhs01 kernel: [<ffffffffa019adaf>] ? xfs_remove+0x2af/0x3a0 [xfs]
May 12 13:53:05 longevity-rhs01 kernel: [<ffffffffa01a8718>] ? xfs_vn_unlink+0x48/0x90 [xfs]
May 12 13:53:05 longevity-rhs01 kernel: [<ffffffff8118481f>] ? vfs_unlink+0x9f/0xe0
May 12 13:53:05 longevity-rhs01 kernel: [<ffffffff8118356a>] ? lookup_hash+0x3a/0x50
May 12 13:53:05 longevity-rhs01 kernel: [<ffffffff81186da3>] ? do_unlinkat+0x183/0x1c0
May 12 13:53:05 longevity-rhs01 kernel: [<ffffffff8117c1a6>] ? sys_newlstat+0x36/0x50
May 12 13:53:05 longevity-rhs01 kernel: [<ffffffff81186df6>] ? sys_unlink+0x16/0x20
May 12 13:53:05 longevity-rhs01 kernel: [<ffffffff8100b0f2>] ? system_call_fastpath+0x16/0x1b
May 12 13:53:05 longevity-rhs01 kernel: XFS (dm-2): xfs_do_force_shutdown(0x8) called from line 3859 of file fs/xfs/xfs_bmap.c.  Return address = 0xffffffffa015ae86
May 12 13:53:05 longevity-rhs01 kernel: XFS (dm-2): Corruption of in-memory data detected.  Shutting down filesystem
May 12 13:53:05 longevity-rhs01 kernel: XFS (dm-2): Please umount the filesystem and rectify the problem(s)

The hung task messages start right around this time and are interleaved with xfs_log_force() errors due to the shutdown until May 13 17:16:00, at which point the system resets, the log is replayed (no errors) and the same restart/log replay sequence repeats a few times until the original hung task message reported above.

Given all that, I suspect the initial issue here is the aforementioned corruption and this should probably be marked as a duplicate of bug 967593.

Vijaykumar or Sudhir, was there any recovery (i.e., xfs_repair) attempt or reformat after the corruption but before the more recent hangs? If not and nobody else objects, I'll plan to dup this to 967593.

Comment 7 Vijaykumar Koppad 2013-06-03 05:42:09 UTC
Brian, there was no attempt of recovery or reformat.

Comment 11 Vivek Agarwal 2015-03-23 07:38:57 UTC
The product version of Red Hat Storage on which this issue was reported has reached End Of Life (EOL) [1], hence this bug report is being closed. If the issue is still observed on a current version of Red Hat Storage, please file a new bug report on the current version.

[1] https://rhn.redhat.com/errata/RHSA-2014-0821.html

