Bug 976189

Summary: statedump crashes in ioc_inode_dump
Product: [Community] GlusterFS
Reporter: Pranith Kumar K <pkarampu>
Component: io-cache
Assignee: Raghavendra G <rgowdapp>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: mainline
CC: gluster-bugs
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.5.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 976292 (view as bug list)
Environment:
Last Closed: 2014-04-17 11:42:52 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 976292

Description Pranith Kumar K 2013-06-20 05:47:31 UTC
Description of problem:
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.3git
/lib64/libc.so.6[0x3479e32920]
/lib64/libc.so.6(gsignal+0x35)[0x3479e328a5]
/lib64/libc.so.6(abort+0x175)[0x3479e34085]
/lib64/libc.so.6[0x3479e2ba1e]
/lib64/libc.so.6(__assert_perror_fail+0x0)[0x3479e2bae0]
/usr/local/lib/libglusterfs.so.0(__inode_path+0x8e)[0x7f59ac0fd092]
/usr/local/lib/glusterfs/3.3git/xlator/performance/io-cache.so(ioc_inode_dump+0x14e)[0x7f59a6de2d91]
/usr/local/lib/libglusterfs.so.0(inode_dump+0x2d1)[0x7f59ac0fe407]
/usr/local/lib/libglusterfs.so.0(inode_table_dump+0x2ac)[0x7f59ac0fe6fb]
/usr/local/lib/glusterfs/3.3git/xlator/debug/io-stats.so(ios_itable_dump+0x39)[0x7f59a67a5cf0]
/usr/local/lib/libglusterfs.so.0(gf_proc_dump_xlator_info+0x169)[0x7f59ac11db86]
/usr/local/lib/libglusterfs.so.0(gf_proc_dump_info+0x4e0)[0x7f59ac11e89a]
/usr/local/sbin/glusterfs(glusterfs_sigwaiter+0x11a)[0x4082a9]
/lib64/libpthread.so.0[0x347a607851]
/lib64/libc.so.6(clone+0x6d)[0x3479ee890d]
---------


Version-Release number of selected component (if applicable):


How reproducible:
This is intermittent

Steps to Reproduce:
1. Untar a Linux kernel tree in a loop.
2. Take a statedump of the mount every 2 hours (see the sketch after this list).
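For step 2, note that a GlusterFS client writes a statedump when the glusterfs process receives SIGUSR1; the glusterfs_sigwaiter frame in the trace below is that code path. A minimal illustrative helper that signals a given pid every 2 hours is sketched here (in practice "kill -USR1 <pid>" from cron works just as well; this program is hypothetical, not part of GlusterFS):

/* Sketch: trigger a statedump in a running glusterfs client by
 * sending SIGUSR1 every 2 hours, as in step 2 above.
 * Usage: ./dumploop <glusterfs-pid>
 */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main (int argc, char **argv)
{
        if (argc != 2) {
                fprintf (stderr, "usage: %s <glusterfs-pid>\n", argv[0]);
                return 1;
        }

        pid_t pid = (pid_t) atoi (argv[1]);

        for (;;) {
                if (kill (pid, SIGUSR1) != 0) {  /* SIGUSR1 => statedump */
                        perror ("kill");
                        return 1;
                }
                sleep (2 * 60 * 60);             /* wait 2 hours */
        }
}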

Actual results:


Expected results:


Additional info:
It is debatable whether itable dumping should cover inodes that are not yet linked, so a fix may be needed there as well.
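The assert fires inside __inode_path because a freshly created inode sits on the lru list with an all-zero gfid until it is linked. Below is a standalone sketch of the guard the eventual fix applies, using libuuid's uuid_is_null(); fake_inode and fake_inode_path are hypothetical stand-ins for the real inode_t and __inode_path, not GlusterFS code:

/* Sketch: skip path resolution for an inode whose gfid is all-zero.
 * Compile with: cc sketch.c -luuid
 * fake_inode and fake_inode_path are hypothetical stand-ins.
 */
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <uuid/uuid.h>

struct fake_inode {
        uuid_t gfid;    /* stays all-zero until the inode is linked */
};

/* Mimics __inode_path: asserts that the gfid is set. */
static void
fake_inode_path (struct fake_inode *inode)
{
        assert (!uuid_is_null (inode->gfid));   /* aborted here pre-fix */
        printf ("path resolved\n");
}

int
main (void)
{
        struct fake_inode unlinked;
        memset (&unlinked, 0, sizeof (unlinked));

        /* The fix: check for a non-null gfid before calling inode_path. */
        if (!uuid_is_null (unlinked.gfid))
                fake_inode_path (&unlinked);
        else
                printf ("skipping not-yet-linked inode in dump\n");

        return 0;
}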

Comment 1 Anand Avati 2013-06-20 08:41:39 UTC
REVIEW: http://review.gluster.org/5241 (performance/io-cache: check for non-null gfid before calling inode_path) posted (#1) for review on master by Raghavendra G (raghavendra)

Comment 2 Anand Avati 2013-06-20 09:22:29 UTC
REVIEW: http://review.gluster.org/5241 (performance/io-cache: check for non-null gfid before calling inode_path) posted (#2) for review on master by Raghavendra G (raghavendra)

Comment 3 Anand Avati 2013-06-20 09:24:42 UTC
REVIEW: http://review.gluster.org/5241 (performance/io-cache: check for non-null gfid before calling inode_path) posted (#3) for review on master by Raghavendra G (raghavendra)

Comment 4 Anand Avati 2013-07-11 02:45:07 UTC
COMMIT: http://review.gluster.org/5241 committed in master by Anand Avati (avati) 
------
commit 02c0b6f0fcd6e9c678b170a8150d2b79942724ef
Author: Raghavendra G <raghavendra>
Date:   Thu Jun 20 14:04:10 2013 +0530

    performance/io-cache: check for non-null gfid before calling inode_path
    
    A new non-linked inode is added to lru list. Hence it might be possible
    that gfid might be NULL when inode_dump is called. To pass asserts in
    inode_path, we've to check for non-null gfid before invoking that
    procedure.
    
    Signed-off-by: Raghavendra G <raghavendra>
    Change-Id: Iff14efc6d6e2faa33b9f7a81e0a66f6a947b77ed
    BUG: 976189
    Reviewed-on: http://review.gluster.org/5241
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Anand Avati <avati>

Comment 5 Niels de Vos 2014-04-17 11:42:52 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user