Bug 762502 - (GLUSTER-770) NFS Xlator - Crash when both GlusterFS server/NFS Server are in the same file
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: nfs
Version: nfs-alpha
Hardware: All
OS: Linux
Priority: low
Severity: low
Assigned To: Shehjar Tikoo
Reported: 2010-03-26 08:40 EDT by Anush Shetty
Modified: 2010-08-12 05:58 EDT (History)
Doc Type: Bug Fix
Regression: RTA
Mount Type: nfs
Attachments: None
Description Anush Shetty 2010-03-26 08:40:29 EDT
The NFS server crashes when both the GlusterFS server volume and the NFS server volume are defined in the same vol file.

Vol file:

volume posix1
  type storage/posix
  option directory /tmp/export1
end-volume

volume server-tcp
  type protocol/server
  option transport-type tcp
  option auth.addr.posix1.allow *
  option transport.socket.listen-port 6996
  subvolumes posix1
end-volume

volume 192.168.1.127-1
  type protocol/client
  option transport-type tcp
  option remote-host localhost
  option remote-port 6996
  option remote-subvolume posix1
end-volume

volume nfsd
  type nfs/server
  subvolumes 192.168.1.127-1
  option rpc-auth.addr.allow *
end-volume

Log:
time of crash: 2010-03-26 18:06:22
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.0.0git
/lib/libc.so.6[0x7faac1947530]
/lib/libpthread.so.0(pthread_spin_lock+0x0)[0x7faac1c8eeb0]
/gluster/gnfs/lib/libglusterfs.so.0(mem_get+0x1a)[0x7faac20d425a]
/gluster/gnfs/lib/glusterfs/3.0.0git/xlator/nfs/server.so(nfs_fop_local_init+0x57)[0x7faabe753fa0]
/gluster/gnfs/lib/glusterfs/3.0.0git/xlator/nfs/server.so(nfs_fop_lookup+0x114)[0x7faabe7544b5]
/gluster/gnfs/lib/glusterfs/3.0.0git/xlator/nfs/server.so(nfs_startup_subvolume+0x211)[0x7faabe752066]
/gluster/gnfs/lib/glusterfs/3.0.0git/xlator/nfs/server.so(notify+0x12b)[0x7faabe752bd7]
/gluster/gnfs/lib/libglusterfs.so.0(xlator_notify+0x43)[0x7faac20b8c83]
/gluster/gnfs/lib/glusterfs/3.0.0git/xlator/protocol/client.so(protocol_client_post_handshake+0x112)[0x7faabe9a2a42]
/gluster/gnfs/lib/glusterfs/3.0.0git/xlator/protocol/client.so(client_setvolume_cbk+0x193)[0x7faabe9a2bf3]
/gluster/gnfs/lib/glusterfs/3.0.0git/xlator/protocol/client.so(protocol_client_pollin+0xca)[0x7faabe991fea]
/gluster/gnfs/lib/glusterfs/3.0.0git/xlator/protocol/client.so(notify+0xe8)[0x7faabe998858]
/gluster/gnfs/lib/libglusterfs.so.0(xlator_notify+0x43)[0x7faac20b8c83]
/gluster/gnfs/lib/glusterfs/3.0.0git/transport/socket.so(socket_event_handler+0xc8)[0x7faabe223578]
/gluster/gnfs/lib/libglusterfs.so.0[0x7faac20d35ad]
/gluster/gnfs/sbin/glusterfs(main+0x862)[0x404562]
/lib/libc.so.6(__libc_start_main+0xfd)[0x7faac1932abd]
/gluster/gnfs/sbin/glusterfs[0x402b19]


gdb backtrace:
(gdb) bt
#0  pthread_spin_lock (lock=0x40000400018) at ../nptl/sysdeps/x86_64/../i386/pthread_spin_lock.c:35
#1  0x00007faac20d425a in mem_get (mem_pool=0x40000400000) at mem-pool.c:87
#2  0x00007faabe753fa0 in nfs_fop_local_init (xl=0x21b2830) at nfs-fops.c:54
#3  0x00007faabe7544b5 in nfs_fop_lookup (xl=0x21b2830, nfu=0x7fff80805020, loc=0x7fff80805070, cbk=0x7faabe751d66 <nfs_start_subvol_lookup_cbk>, 
    local=0x21b3f30) at nfs-fops.c:271
#4  0x00007faabe752066 in nfs_startup_subvolume (nfs=0x21b3f30, xl=0x21b2830) at nfs.c:268
#5  0x00007faabe752bd7 in notify (this=0x21b32e0, event=5, data=0x21b2830) at nfs.c:520
#6  0x00007faac20b8c83 in xlator_notify (xl=0x21b32e0, event=5, data=0x21b2830) at xlator.c:923
#7  0x00007faabe9a2a42 in protocol_client_post_handshake (frame=<value optimized out>, this=0x21b2830) at client-protocol.c:6127
#8  0x00007faabe9a2bf3 in client_setvolume_cbk (frame=0x7faab8002de8, hdr=0x0, hdrlen=<value optimized out>, iobuf=<value optimized out>)
    at client-protocol.c:6254
#9  0x00007faabe991fea in protocol_client_pollin (this=0x21b2830, trans=0x21b4930) at client-protocol.c:6827
#10 0x00007faabe998858 in notify (this=0x40000400018, event=<value optimized out>, data=0x21b4930) at client-protocol.c:6946
#11 0x00007faac20b8c83 in xlator_notify (xl=0x21b2830, event=2, data=0x21b4930) at xlator.c:923
#12 0x00007faabe223578 in socket_event_handler (fd=<value optimized out>, idx=1, data=0x21b4930, poll_in=1, poll_out=0, poll_err=<value optimized out>)
    at socket.c:831
#13 0x00007faac20d35ad in event_dispatch_epoll_handler (event_pool=0x21ab330) at event.c:804
#14 event_dispatch_epoll (event_pool=0x21ab330) at event.c:867
#15 0x0000000000404562 in main (argc=<value optimized out>, argv=<value optimized out>) at glusterfsd.c:1415
Comment 1 Anand Avati 2010-04-02 04:14:35 EDT
PATCH: http://patches.gluster.com/patch/3080 in master (nfs: Redesign fop argument passing to support single volfile use)
Comment 2 Anand Avati 2010-04-20 01:51:12 EDT
PATCH: http://patches.gluster.com/patch/3150 in master (nfs: Remove reference to top)
Comment 3 Shehjar Tikoo 2010-05-31 08:59:08 EDT
Regression Test:
The problem occurs because of a bug in the NFS code path that causes glusterfsd to crash during process startup.


Test Case:
Very simple: use the volume file shown in the report and start the glusterfsd process. If it starts up correctly, voilà!
