Bug 1221457 - nfs-ganesha+posix: glusterfsd crash while executing the posix test suite
Summary: nfs-ganesha+posix: glusterfsd crash while executing the posix test suite
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: upcall
Version: 3.7.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-05-14 05:11 UTC by Saurabh
Modified: 2017-03-06 07:08 UTC (History)
8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-06 07:08:38 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments (Terms of Use)
coredump of glusterfsd (690.26 KB, application/x-xz)
2015-05-14 05:11 UTC, Saurabh

Description Saurabh 2015-05-14 05:11:01 UTC
Created attachment 1025278 [details]
coredump of glusterfsd

Description of problem:
With nfs-ganesha running and the posix test suite executing on the mount point, several failures were reported on stdout and the glusterfsd process dumped core.

(gdb) bt
#0  upcall_cache_invalidate (frame=0x7f92ff28208c, this=0x7f92f00126c0, client=0x0, inode=0x7f92dc40806c, flags=32, stbuf=0x0, p_stbuf=0x0, oldp_stbuf=0x0) at upcall-internal.c:496
#1  0x00007f92f407db42 in up_lookup_cbk (frame=0x7f92ff28208c, cookie=<value optimized out>, this=0x7f92f00126c0, op_ret=0, op_errno=22, inode=0x7f92dc40806c, stbuf=0x7f92cdff6a40, xattr=0x7f92fec7be80, 
    postparent=0x7f92cdff69d0) at upcall.c:767
#2  0x00007f92f428ceb3 in pl_lookup_cbk (frame=0x7f92ff283560, cookie=<value optimized out>, this=<value optimized out>, op_ret=0, op_errno=22, inode=0x7f92dc40806c, buf=0x7f92cdff6a40, 
    xdata=0x7f92fec7be80, postparent=0x7f92cdff69d0) at posix.c:2036
#3  0x00007f92f44a6d0e in posix_acl_lookup_cbk (frame=0x7f92ff282138, cookie=<value optimized out>, this=0x7f92f000ff80, op_ret=0, op_errno=22, inode=0x7f92dc40806c, buf=0x7f92cdff6a40, 
    xattr=0x7f92fec7be80, postparent=0x7f92cdff69d0) at posix-acl.c:806
#4  0x00007f92f46b2cc9 in br_stub_lookup_cbk (frame=0x7f92ff2827f0, cookie=<value optimized out>, this=<value optimized out>, op_ret=0, op_errno=22, inode=0x7f92dc40806c, stbuf=0x7f92cdff6a40, 
    xattr=0x7f92fec7be80, postparent=0x7f92cdff69d0) at bit-rot-stub.c:1639
#5  0x00007f92f4f7dac3 in ctr_lookup_cbk (frame=0x7f92ff281fe0, cookie=<value optimized out>, this=<value optimized out>, op_ret=0, op_errno=22, inode=0x7f92dc40806c, buf=0x7f92cdff6a40, 
    dict=0x7f92fec7be80, postparent=0x7f92cdff69d0) at changetimerecorder.c:259
#6  0x00007f92f53bde68 in posix_lookup (frame=0x7f92ff2825ec, this=<value optimized out>, loc=0x7f92fed08e14, xdata=<value optimized out>) at posix.c:208
#7  0x00007f93003f266d in default_lookup (frame=0x7f92ff2825ec, this=0x7f92f0008ed0, loc=0x7f92fed08e14, xdata=0x7f92fec7bf98) at defaults.c:2139
#8  0x00007f92f4f83038 in ctr_lookup (frame=0x7f92ff281fe0, this=0x7f92f000a500, loc=0x7f92fed08e14, xdata=0x7f92fec7bf98) at changetimerecorder.c:310
#9  0x00007f93003f266d in default_lookup (frame=0x7f92ff281fe0, this=0x7f92f000cc00, loc=0x7f92fed08e14, xdata=0x7f92fec7bf98) at defaults.c:2139
#10 0x00007f92f46b1b69 in br_stub_lookup (frame=<value optimized out>, this=0x7f92f000eb10, loc=0x7f92fed08e14, xdata=0x7f92fec7bf98) at bit-rot-stub.c:1690
#11 0x00007f92f44a5832 in posix_acl_lookup (frame=0x7f92ff282138, this=0x7f92f000ff80, loc=0x7f92fed08e14, xattr=<value optimized out>) at posix-acl.c:858
#12 0x00007f92f428c642 in pl_lookup (frame=0x7f92ff283560, this=0x7f92f0011390, loc=0x7f92fed08e14, xdata=0x7f92fec7bf98) at posix.c:2080
#13 0x00007f92f407a0b2 in up_lookup (frame=0x7f92ff28208c, this=0x7f92f00126c0, loc=0x7f92fed08e14, xattr_req=0x7f92fec7bf98) at upcall.c:793
#14 0x00007f93003f4c9c in default_lookup_resume (frame=0x7f92ff2819d4, this=0x7f92f0013b30, loc=0x7f92fed08e14, xdata=0x7f92fec7bf98) at defaults.c:1694
#15 0x00007f9300411b60 in call_resume (stub=0x7f92fed08dd4) at call-stub.c:2576
#16 0x00007f92efdfc398 in iot_worker (data=0x7f92f004e950) at io-threads.c:214
#17 0x00000036e2e079d1 in start_thread () from /lib64/libpthread.so.0
#18 0x00000036e2ae88fd in clone () from /lib64/libc.so.6

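Frame #0 shows upcall_cache_invalidate() entered with client=0x0 and stbuf=0x0; dereferencing either without a guard would explain the crash. Below is a minimal, hedged sketch of the kind of defensive early-return that would avoid this. The struct definitions and function body are hypothetical stand-ins for illustration, not the actual GlusterFS types or upcall code:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins for the real GlusterFS types. */
typedef struct { char gfid[16]; } inode_t;
typedef struct { uint64_t ia_size; } iatt_t;
typedef struct { const char *client_uid; } client_t;

/* Sketch of the guard: bail out early when the callback arrives
 * without a client or stat buffer, instead of dereferencing NULL. */
static int
upcall_cache_invalidate_sketch (client_t *client, inode_t *inode,
                                iatt_t *stbuf)
{
        if (!client || !inode || !stbuf)
                return -1;      /* nothing to invalidate; skip safely */

        /* ... the real code would look up the upcall entry for
         * client->client_uid and queue an invalidation here ... */
        return 0;
}
```

With a check like this, a lookup callback that carries no client context (as in the backtrace above) would be ignored rather than crashing the brick process.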
[root@nfs1 ~]# cat /etc/ganesha/exports/export.vol2.conf 
# WARNING : Using Gluster CLI will overwrite manual
# changes made to this file. To avoid it, edit the
# file, copy it over to all the NFS-Ganesha nodes
# and run ganesha-ha.sh --refresh-config.
EXPORT {
      Export_Id = 2;
      Path = "/vol2";
      FSAL {
           name = GLUSTER;
           hostname = "localhost";
           volume = "vol2";
      }
      Access_type = RW;
      Squash = "No_root_squash";
      Pseudo = "/vol2";
      Protocols = "3", "4";
      Transports = "UDP", "TCP";
      SecType = "sys";
}


Version-Release number of selected component (if applicable):
glusterfs-3.7.0beta2-0.0.el6.x86_64
nfs-ganesha-2.2.0-0.el6.x86_64

How reproducible:
First time this issue has been seen.


Actual results:
glusterfsd dumps core; see the backtrace in the Description section.

Expected results:
glusterfsd should not crash or dump core.

Additional info:

