Bug 1745389 - [Ganesha] Ganesha crashed during mem_get, when failback was performed
Summary: [Ganesha] Ganesha crashed during mem_get, when failback was performed
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: core
Version: rhgs-3.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Susant Kumar Palai
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On:
Blocks: 1725716 1746324
 
Reported: 2019-08-26 02:27 UTC by Manisha Saini
Modified: 2020-01-28 09:53 UTC
CC: 14 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-01-28 09:53:51 UTC
Embargoed:



Description Manisha Saini 2019-08-26 02:27:13 UTC
Description of problem:

4 servers, 1 volume exported via nfs-ganesha and mounted on 3 different clients via NFSv4.1.
ls -lRt and du -sh were running on 2 clients, and a Linux kernel untar had completed on the 3rd client.

While the readdir operations were running, one of the server nodes (gprfs033.sbu.lab.eng.bos.redhat.com), whose VIP was used to mount the volume on a client, was rebooted.
While failback was running, ganesha crashed.



Ganesha crashed on node gprfs040.sbu.lab.eng.bos.redhat.com


--------------
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/bin/ganesha.nfsd -L /var/log/ganesha/ganesha.log -f /etc/ganesha/ganesha.c'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007f182c1d0c42 in mem_get () from /lib64/libglusterfs.so.0
Missing separate debuginfos, use: debuginfo-install bzip2-libs-1.0.6-13.el7.x86_64 dbus-libs-1.10.24-13.el7_6.x86_64 elfutils-libelf-0.176-2.el7.x86_64 elfutils-libs-0.176-2.el7.x86_64 glibc-2.17-292.el7.x86_64 glusterfs-6.0-11.el7rhgs.x86_64 glusterfs-api-6.0-11.el7rhgs.x86_64 glusterfs-client-xlators-6.0-11.el7rhgs.x86_64 glusterfs-libs-6.0-11.el7rhgs.x86_64 gssproxy-0.7.0-26.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-37.el7_6.x86_64 libacl-2.2.51-14.el7.x86_64 libattr-2.4.46-13.el7.x86_64 libblkid-2.23.2-61.el7.x86_64 libcap-2.22-10.el7.x86_64 libcom_err-1.42.9-16.el7.x86_64 libgcc-4.8.5-39.el7.x86_64 libgcrypt-1.5.3-14.el7.x86_64 libgpg-error-1.12-3.el7.x86_64 libnfsidmap-0.25-19.el7.x86_64 libselinux-2.5-14.1.el7.x86_64 libuuid-2.23.2-61.el7.x86_64 libwbclient-4.9.1-6.el7.x86_64 lz4-1.7.5-3.el7.x86_64 openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64 samba-client-libs-4.9.1-6.el7.x86_64 systemd-libs-219-67.el7_7.1.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-18.el7.x86_64
(gdb) bt
#0  0x00007f182c1d0c42 in mem_get () from /lib64/libglusterfs.so.0
#1  0x00007f182c196d3c in get_new_data () from /lib64/libglusterfs.so.0
#2  0x00007f182c19924f in data_from_uint32 () from /lib64/libglusterfs.so.0
#3  0x00007f182c19be45 in dict_set_uint32 () from /lib64/libglusterfs.so.0
#4  0x00007f1818bc8383 in dht_set_file_xattr_req () from /usr/lib64/glusterfs/6.0/xlator/cluster/distribute.so
#5  0x00007f1818bc84c4 in dht_do_discover () from /usr/lib64/glusterfs/6.0/xlator/cluster/distribute.so
#6  0x00007f1818bea0ce in dht_lookup () from /usr/lib64/glusterfs/6.0/xlator/cluster/distribute.so
#7  0x00007f1818975b4f in wb_lookup () from /usr/lib64/glusterfs/6.0/xlator/performance/write-behind.so
#8  0x00007f182c23847d in default_lookup () from /lib64/libglusterfs.so.0
#9  0x00007f181854506f in ioc_lookup () from /usr/lib64/glusterfs/6.0/xlator/performance/io-cache.so
#10 0x00007f182c23847d in default_lookup () from /lib64/libglusterfs.so.0
#11 0x00007f17f53e3584 in qr_lookup () from /usr/lib64/glusterfs/6.0/xlator/performance/quick-read.so
#12 0x00007f17b45ed179 in mdc_lookup () from /usr/lib64/glusterfs/6.0/xlator/performance/md-cache.so
#13 0x00007f17b43aa5e8 in io_stats_lookup () from /usr/lib64/glusterfs/6.0/xlator/debug/io-stats.so
#14 0x00007f182c23847d in default_lookup () from /lib64/libglusterfs.so.0
#15 0x00007f17b41937e7 in meta_lookup () from /usr/lib64/glusterfs/6.0/xlator/meta.so
#16 0x00007f182c1e57a5 in syncop_lookup () from /lib64/libglusterfs.so.0
#17 0x00007f182c4b3afc in glfs_h_create_from_handle () from /lib64/libgfapi.so.0
#18 0x00007f182c6c430e in create_handle (export_pub=0x7f17dc005f90, fh_desc=<optimized out>, pub_handle=0x7f1811311f48, attrs_out=0x7f1811311f70)
    at /usr/src/debug/nfs-ganesha-2.7.3/src/FSAL/FSAL_GLUSTER/export.c:239
#19 0x00005568317b2a1f in mdcache_locate_host (fh_desc=0x7f1811312160, export=export@entry=0x7f17dc005c70, entry=entry@entry=0x7f18113120f0, 
    attrs_out=attrs_out@entry=0x0) at /usr/src/debug/nfs-ganesha-2.7.3/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:1055
#20 0x00005568317abcaa in mdcache_create_handle (exp_hdl=0x7f17dc005c70, fh_desc=<optimized out>, handle=0x7f1811312158, attrs_out=0x0)
    at /usr/src/debug/nfs-ganesha-2.7.3/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:1578
#21 0x00005568316fae92 in nfs4_mds_putfh (data=data@entry=0x7f1811312720) at /usr/src/debug/nfs-ganesha-2.7.3/src/Protocols/NFS/nfs4_op_putfh.c:211
#22 0x00005568316fb3c8 in nfs4_op_putfh (op=0x7f18040012f0, data=0x7f1811312720, resp=0x7f1804001da0)
    at /usr/src/debug/nfs-ganesha-2.7.3/src/Protocols/NFS/nfs4_op_putfh.c:281
#23 0x00005568316e9703 in nfs4_Compound (arg=<optimized out>, req=<optimized out>, res=0x7f1804001bf0)
    at /usr/src/debug/nfs-ganesha-2.7.3/src/Protocols/NFS/nfs4_Compound.c:942
#24 0x00005568316dcb1f in nfs_rpc_process_request (reqdata=0x7f1804001440) at /usr/src/debug/nfs-ganesha-2.7.3/src/MainNFSD/nfs_worker_thread.c:1328
#25 0x00005568316dbfca in nfs_rpc_decode_request (xprt=0x7f1814061310, xdrs=0x7f1804000f80)
    at /usr/src/debug/nfs-ganesha-2.7.3/src/MainNFSD/nfs_rpc_dispatcher_thread.c:1345
#26 0x00007f18334ea62d in svc_rqst_xprt_task () from /lib64/libntirpc.so.1.7
#27 0x00007f18334eab6a in svc_rqst_run_task () from /lib64/libntirpc.so.1.7
#28 0x00007f18334f2c0b in work_pool_thread () from /lib64/libntirpc.so.1.7
#29 0x00007f1831888ea5 in start_thread () from /lib64/libpthread.so.0
#30 0x00007f18311938cd in clone () from /lib64/libc.so.6
------------------------
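
For reference, the trace above can be re-read from the core roughly as follows. This is a hedged sketch: the core-file path is a placeholder that depends on the abrt/core_pattern configuration, and the debuginfo packages are a subset of the ones the gdb hint above asks for.

----------
# Install the debuginfo packages gdb complains about above (subset shown)
debuginfo-install glusterfs-libs glusterfs-client-xlators nfs-ganesha

# Open the ganesha core; /path/to/coredump is a placeholder
gdb /usr/bin/ganesha.nfsd /path/to/coredump
(gdb) bt                     # the backtrace captured above, now with symbols
(gdb) frame 0                # inspect the mem_get() frame and its pool argument
(gdb) thread apply all bt    # look for other workers racing with the failback
----------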



Version-Release number of selected component (if applicable):
=========================

# rpm -qa | grep ganesha
nfs-ganesha-gluster-2.7.3-7.el7rhgs.x86_64
nfs-ganesha-debuginfo-2.7.3-7.el7rhgs.x86_64
glusterfs-ganesha-6.0-11.el7rhgs.x86_64
nfs-ganesha-2.7.3-7.el7rhgs.x86_64



How reproducible:
================
1/1


Steps to Reproduce:
===================
1. Create a 4-node ganesha cluster.

2. Create a 4 x 3 distributed-replicate volume.

3. Export the volume via ganesha.

4. Mount the volume on 3 clients via NFSv4.1 (a command sketch for steps 2-4 follows).
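
A minimal command sketch for steps 2-4. The volume name, brick paths, mount point and VIP are placeholders, hostnames are shortened, the brick order only serves to keep each replica set on distinct servers, and the ganesha HA cluster from step 1 is assumed to already exist.

----------
# Step 2: 4 x 3 distributed-replicate volume (placeholder brick paths)
gluster volume create testvol replica 3 \
    gprfs033:/bricks/b1/testvol gprfs034:/bricks/b1/testvol gprfs035:/bricks/b1/testvol \
    gprfs040:/bricks/b1/testvol gprfs033:/bricks/b2/testvol gprfs034:/bricks/b2/testvol \
    gprfs035:/bricks/b2/testvol gprfs040:/bricks/b2/testvol gprfs033:/bricks/b3/testvol \
    gprfs034:/bricks/b3/testvol gprfs035:/bricks/b3/testvol gprfs040:/bricks/b3/testvol
gluster volume start testvol

# Step 3: export the volume through the ganesha cluster
gluster volume set testvol ganesha.enable on

# Step 4: on each client, mount over a VIP with NFSv4.1
mount -t nfs -o vers=4.1 <VIP>:/testvol /mnt/testvol
----------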

5. Run the following workload (a sketch of the loops follows this list):

Client 1: Linux kernel untars of large directories (completed successfully)
Client 2: du -sh in a loop
Client 3: ls -lRt in a loop
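
A sketch of those client loops (the mount point and tarball name are placeholders):

----------
# Client 1: large untar (completed successfully before the reboot)
tar xf linux.tar.xz -C /mnt/testvol/untar/

# Client 2: du -sh in a loop
while true; do du -sh /mnt/testvol > /dev/null; done

# Client 3: ls -lRt (readdir-heavy) in a loop
while true; do ls -lRt /mnt/testvol > /dev/null; done
----------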

6. While the readdir operations were running, rebooted one of the server nodes (gprfs033.sbu.lab.eng.bos.redhat.com) whose VIP was used to mount the volume on a client.

When the node came back up, the "nfs_unblock" resource was in a blocked state in pcs status for node "gprfs040.sbu.lab.eng.bos.redhat.com" (the node on which the crash was observed), due to which failback was not happening.

---------------
Full list of resources:

 Clone Set: nfs_setup-clone [nfs_setup]
     Started: [ gprfs033.sbu.lab.eng.bos.redhat.com gprfs034.sbu.lab.eng.bos.redhat.com gprfs035.sbu.lab.eng.bos.redhat.com gprfs040.sbu.lab.eng.bos.redhat.com ]
 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ gprfs033.sbu.lab.eng.bos.redhat.com gprfs034.sbu.lab.eng.bos.redhat.com gprfs035.sbu.lab.eng.bos.redhat.com gprfs040.sbu.lab.eng.bos.redhat.com ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Started: [ gprfs033.sbu.lab.eng.bos.redhat.com gprfs034.sbu.lab.eng.bos.redhat.com gprfs035.sbu.lab.eng.bos.redhat.com ]
     Stopped: [ gprfs040.sbu.lab.eng.bos.redhat.com ]
 Resource Group: gprfs033.sbu.lab.eng.bos.redhat.com-group
     gprfs033.sbu.lab.eng.bos.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Started gprfs040.sbu.lab.eng.bos.redhat.com
     gprfs033.sbu.lab.eng.bos.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started gprfs040.sbu.lab.eng.bos.redhat.com
     gprfs033.sbu.lab.eng.bos.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	FAILED gprfs040.sbu.lab.eng.bos.redhat.com (blocked)
 Resource Group: gprfs034.sbu.lab.eng.bos.redhat.com-group
     gprfs034.sbu.lab.eng.bos.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Started gprfs034.sbu.lab.eng.bos.redhat.com
     gprfs034.sbu.lab.eng.bos.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started gprfs034.sbu.lab.eng.bos.redhat.com
     gprfs034.sbu.lab.eng.bos.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Started gprfs034.sbu.lab.eng.bos.redhat.com
 Resource Group: gprfs035.sbu.lab.eng.bos.redhat.com-group
     gprfs035.sbu.lab.eng.bos.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Started gprfs035.sbu.lab.eng.bos.redhat.com
     gprfs035.sbu.lab.eng.bos.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started gprfs035.sbu.lab.eng.bos.redhat.com
     gprfs035.sbu.lab.eng.bos.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Started gprfs035.sbu.lab.eng.bos.redhat.com
 Resource Group: gprfs040.sbu.lab.eng.bos.redhat.com-group
     gprfs040.sbu.lab.eng.bos.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Started gprfs040.sbu.lab.eng.bos.redhat.com
     gprfs040.sbu.lab.eng.bos.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started gprfs040.sbu.lab.eng.bos.redhat.com
     gprfs040.sbu.lab.eng.bos.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	FAILED gprfs040.sbu.lab.eng.bos.redhat.com (blocked)
-------------------


7. Ran pcs resource cleanup.

----------
[root@gprfs033 ~]# pcs resource cleanup
Cleaned up all resources on all nodes
Waiting for 2 replies from the CRMd.. OK
----------


pcs status output
---------
# pcs status
Cluster name: ganesha-ha
Stack: corosync
Current DC: gprfs035.sbu.lab.eng.bos.redhat.com (version 1.1.20-5.el7-3c4c782f70) - partition with quorum
Last updated: Sun Aug 25 22:00:21 2019
Last change: Sun Aug 25 22:00:17 2019 by hacluster via crmd on gprfs040.sbu.lab.eng.bos.redhat.com

4 nodes configured
24 resources configured

Online: [ gprfs033.sbu.lab.eng.bos.redhat.com gprfs034.sbu.lab.eng.bos.redhat.com gprfs035.sbu.lab.eng.bos.redhat.com gprfs040.sbu.lab.eng.bos.redhat.com ]

Full list of resources:

 Clone Set: nfs_setup-clone [nfs_setup]
     Started: [ gprfs033.sbu.lab.eng.bos.redhat.com gprfs034.sbu.lab.eng.bos.redhat.com gprfs035.sbu.lab.eng.bos.redhat.com gprfs040.sbu.lab.eng.bos.redhat.com ]
 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ gprfs033.sbu.lab.eng.bos.redhat.com gprfs034.sbu.lab.eng.bos.redhat.com gprfs035.sbu.lab.eng.bos.redhat.com gprfs040.sbu.lab.eng.bos.redhat.com ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Started: [ gprfs033.sbu.lab.eng.bos.redhat.com gprfs034.sbu.lab.eng.bos.redhat.com gprfs035.sbu.lab.eng.bos.redhat.com ]
     Stopped: [ gprfs040.sbu.lab.eng.bos.redhat.com ]
 Resource Group: gprfs033.sbu.lab.eng.bos.redhat.com-group
     gprfs033.sbu.lab.eng.bos.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Started gprfs033.sbu.lab.eng.bos.redhat.com
     gprfs033.sbu.lab.eng.bos.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started gprfs033.sbu.lab.eng.bos.redhat.com
     gprfs033.sbu.lab.eng.bos.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Started gprfs033.sbu.lab.eng.bos.redhat.com
 Resource Group: gprfs034.sbu.lab.eng.bos.redhat.com-group
     gprfs034.sbu.lab.eng.bos.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Started gprfs034.sbu.lab.eng.bos.redhat.com
     gprfs034.sbu.lab.eng.bos.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started gprfs034.sbu.lab.eng.bos.redhat.com
     gprfs034.sbu.lab.eng.bos.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Started gprfs034.sbu.lab.eng.bos.redhat.com
 Resource Group: gprfs035.sbu.lab.eng.bos.redhat.com-group
     gprfs035.sbu.lab.eng.bos.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Started gprfs035.sbu.lab.eng.bos.redhat.com
     gprfs035.sbu.lab.eng.bos.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started gprfs035.sbu.lab.eng.bos.redhat.com
     gprfs035.sbu.lab.eng.bos.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Started gprfs035.sbu.lab.eng.bos.redhat.com
 Resource Group: gprfs040.sbu.lab.eng.bos.redhat.com-group
     gprfs040.sbu.lab.eng.bos.redhat.com-nfs_block	(ocf::heartbeat:portblock):	Started gprfs035.sbu.lab.eng.bos.redhat.com
     gprfs040.sbu.lab.eng.bos.redhat.com-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started gprfs035.sbu.lab.eng.bos.redhat.com
     gprfs040.sbu.lab.eng.bos.redhat.com-nfs_unblock	(ocf::heartbeat:portblock):	Started gprfs035.sbu.lab.eng.bos.redhat.com

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
---------------


Actual results:
================

After step 7, ganesha crashed in mem_get on one of the nodes while failback was running.


Expected results:
=================
No crash should be observed.


Additional info:

