Bug 1379665 - Ganesha crashes while removing files from clients.
Summary: Ganesha crashes while removing files from clients.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: nfs-ganesha
Version: rhgs-3.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Soumya Koduri
QA Contact: Arthy Loganathan
URL:
Whiteboard:
Depends On: 1374015 1375564
Blocks: 1351528
 
Reported: 2016-09-27 11:41 UTC by Shashank Raj
Modified: 2017-03-23 06:23 UTC
CC List: 10 users

Fixed In Version: nfs-ganesha-2.4.1-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1374015
Environment:
Last Closed: 2017-03-23 06:23:25 UTC
Embargoed:


Attachments


Links
System: Red Hat Product Errata
ID: RHEA-2017:0493
Private: no
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Gluster Storage 3.2.0 nfs-ganesha bug fix and enhancement update
Last Updated: 2017-03-23 09:19:13 UTC

Description Shashank Raj 2016-09-27 11:41:18 UTC
+++ This bug was initially created as a clone of Bug #1374015 +++

Description of problem:

Ganesha crashes while removing files from clients.

Version-Release number of selected component (if applicable):

[root@dhcp43-116 ~]# rpm -qa|grep glusterfs
glusterfs-geo-replication-3.8.3-0.6.git7956718.el7.centos.x86_64
glusterfs-api-3.8.3-0.6.git7956718.el7.centos.x86_64
glusterfs-fuse-3.8.3-0.6.git7956718.el7.centos.x86_64
glusterfs-server-3.8.3-0.6.git7956718.el7.centos.x86_64
glusterfs-libs-3.8.3-0.6.git7956718.el7.centos.x86_64
glusterfs-client-xlators-3.8.3-0.6.git7956718.el7.centos.x86_64
glusterfs-ganesha-3.8.3-0.6.git7956718.el7.centos.x86_64
glusterfs-cli-3.8.3-0.6.git7956718.el7.centos.x86_64
glusterfs-debuginfo-3.8.3-0.6.git7956718.el7.centos.x86_64
glusterfs-3.8.3-0.6.git7956718.el7.centos.x86_64

[root@dhcp43-116 ~]# rpm -qa|grep ganesha
nfs-ganesha-gluster-next.20160827.7641daf-1.el7.centos.x86_64
glusterfs-ganesha-3.8.3-0.6.git7956718.el7.centos.x86_64
nfs-ganesha-debuginfo-next.20160827.7641daf-1.el7.centos.x86_64
nfs-ganesha-next.20160827.7641daf-1.el7.centos.x86_64


How reproducible:

Twice

Steps to Reproduce:
1. Create a large number of files on a dist-rep volume via a v4 ganesha mount from 2 different clients.
2. Start removing different sets of files simultaneously from both clients.
3. Observe that while the removal is in progress, ganesha crashes on the node serving the mounts, with the below backtrace (a shell sketch of these steps follows the trace):

(gdb) bt
#0  0x00007f937af59c5f in __inode_ctx_free (inode=inode@entry=0x7f9356c9fe24)
    at inode.c:332
#1  0x00007f937af5ae42 in __inode_destroy (inode=0x7f9356c9fe24) at inode.c:353
#2  inode_table_prune (table=table@entry=0x7f9360103f30) at inode.c:1543
#3  0x00007f937af5b124 in inode_unref (inode=0x7f9356c9fe24) at inode.c:524
#4  0x00007f937b232216 in pub_glfs_h_close (object=0x7f925400f660)
    at glfs-handleops.c:1365
#5  0x00007f937b64a929 in handle_release (obj_hdl=0x7f92540308f8)
    at /usr/src/debug/nfs-ganesha/src/FSAL/FSAL_GLUSTER/handle.c:70
#6  0x00007f937fd1caf4 in mdcache_lru_clean (entry=0x7f924cd8f060)
    at /usr/src/debug/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:421
#7  mdcache_lru_get (entry=entry@entry=0x7f92e94e1a70)
    at /usr/src/debug/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:1229
#8  0x00007f937fd268b6 in mdcache_alloc_handle (fs=0x0, 
    sub_handle=0x7f924802e358, export=0x7f9380e0fab0)
    at /usr/src/debug/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:117
#9  mdcache_new_entry (export=export@entry=0x7f9380e0fab0, 
    sub_handle=0x7f924802e358, attrs_in=attrs_in@entry=0x7f92e94e1bd0, 
    attrs_out=attrs_out@entry=0x0, new_directory=new_directory@entry=false, 
    entry=entry@entry=0x7f92e94e1b30, state=state@entry=0x0)
    at /usr/src/debug/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:367
#10 0x00007f937fd208e4 in mdcache_alloc_and_check_handle (
    export=export@entry=0x7f9380e0fab0, sub_handle=<optimized out>, 
    new_obj=new_obj@entry=0x7f92e94e1bc8, 
    new_directory=new_directory@entry=false, 
    attrs_in=attrs_in@entry=0x7f92e94e1bd0, attrs_out=attrs_out@entry=0x0, 
    tag=tag@entry=0x7f937fd58b10 "lookup ", parent=parent@entry=0x7f9380e783c0, 
    name=name@entry=0x7f924e2d5e4c "def26337", invalidate=invalidate@entry=true, 
    state=state@entry=0x0)
    at /usr/src/debug/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:93
#11 0x00007f937fd27986 in mdc_lookup_uncached (
    mdc_parent=mdc_parent@entry=0x7f9380e783c0, 
    name=name@entry=0x7f924e2d5e4c "def26337", 
    new_entry=new_entry@entry=0x7f92e94e1d40, attrs_out=attrs_out@entry=0x0)
    at /usr/src/debug/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:981
#12 0x00007f937fd1f53f in mdcache_readdir (dir_hdl=0x7f9380e783f8, 
    whence=<optimized out>, dir_state=0x7f92e94e1dc0, 
    cb=0x7f937fc4bc00 <populate_dirent>, attrmask=<optimized out>, 
    eod_met=0x7f92e94e1e8b)
    at /usr/src/debug/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:626
#13 0x00007f937fc4d97d in fsal_readdir (directory=directory@entry=0x7f9380e783f8, 
    cookie=cookie@entry=4053976218433744092, 
    nbfound=nbfound@entry=0x7f92e94e1e8c, eod_met=eod_met@entry=0x7f92e94e1e8b, 
    attrmask=122830, cb=cb@entry=0x7f937fc88d50 <nfs4_readdir_callback>, 
    opaque=opaque@entry=0x7f92e94e1e90)
    at /usr/src/debug/nfs-ganesha/src/FSAL/fsal_helper.c:1457
#14 0x00007f937fc89d1b in nfs4_op_readdir (op=0x7f92a40219d0, 
    data=0x7f92e94e20b0, resp=0x7f9248025080)
    at /usr/src/debug/nfs-ganesha/src/Protocols/NFS/nfs4_op_readdir.c:631
#15 0x00007f937fc765bf in nfs4_Compound (arg=<optimized out>, 
    req=<optimized out>, res=0x7f924802cb50)
    at /usr/src/debug/nfs-ganesha/src/Protocols/NFS/nfs4_Compound.c:734
#16 0x00007f937fc65c0c in nfs_rpc_execute (reqdata=reqdata@entry=0x7f92a4013dd0)
    at /usr/src/debug/nfs-ganesha/src/MainNFSD/nfs_worker_thread.c:1281
#17 0x00007f937fc674bd in worker_run (ctx=0x7f9380ecf9e0)
    at /usr/src/debug/nfs-ganesha/src/MainNFSD/nfs_worker_thread.c:1548
#18 0x00007f937fcfb629 in fridgethr_start_routine (arg=0x7f9380ecf9e0)
    at /usr/src/debug/nfs-ganesha/src/support/fridgethr.c:550
#19 0x00007f937e1d2dc5 in start_thread () from /lib64/libpthread.so.0
#20 0x00007f937d8a01cd in clone () from /lib64/libc.so.6
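
For reference, a shell sketch of the reproduction steps above. The VIP hostname, mount point, and file counts are illustrative assumptions; the volume name "ozone" and the def<N> file naming are taken from the dumps and backtraces in this report.

# On each client: mount the volume over NFSv4 through the ganesha VIP
# (hostname and mount point are hypothetical)
mount -t nfs -o vers=4 ganesha-vip:/ozone /mnt/ozone

# Client 1: create a large number of files
for i in $(seq 1 100000); do touch /mnt/ozone/def$i; done

# Then remove disjoint halves of the files simultaneously:
# on client 1
for i in $(seq 1 50000); do rm -f /mnt/ozone/def$i; done
# on client 2, at the same time
for i in $(seq 50001 100000); do rm -f /mnt/ozone/def$i; done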

Actual results:

Ganesha crashes on one of the nodes while files are being removed from 2 clients.

Expected results:

There should not be any crash.

Additional info:

There is another bug filed for a ganesha crash seen during removal of files (https://bugzilla.redhat.com/show_bug.cgi?id=1373262), but the backtrace in this case is different, so a new bug is being filed.

--- Additional comment from Niels de Vos on 2016-09-12 01:39:03 EDT ---

All 3.8.x bugs are now reported against version 3.8 (without .x). For more information, see http://www.gluster.org/pipermail/gluster-devel/2016-September/050859.html

--- Additional comment from Shashank Raj on 2016-09-16 02:18:17 EDT ---

With the private build:

[root@dhcp43-116 ~]# rpm -qa|grep ganesha
glusterfs-ganesha-3.8.3-0.6.git7956718.el7.centos.x86_64
nfs-ganesha-gluster-2.4-0.rc4.el7.centos.x86_64
nfs-ganesha-debuginfo-2.4-0.rc4.el7.centos.x86_64
nfs-ganesha-2.4-0.rc4.el7.centos.x86_64

Ganesha crashes with a segfault while removing files from 2 mount points, with the below backtrace:

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fe9d6f8d700 (LWP 8977)]
0x00007fea3ed2dc5f in __inode_ctx_free (inode=inode@entry=0x7fea1dbd022c)
    at inode.c:332
332	                                xl->cbks->forget (xl, inode);
(gdb) bt
#0  0x00007fea3ed2dc5f in __inode_ctx_free (inode=inode@entry=0x7fea1dbd022c)
    at inode.c:332
#1  0x00007fea3ed2ee42 in __inode_destroy (inode=0x7fea1dbd022c) at inode.c:353
#2  inode_table_prune (table=table@entry=0x7fea24002890) at inode.c:1543
#3  0x00007fea3ed2f124 in inode_unref (inode=0x7fea1dbd022c) at inode.c:524
#4  0x00007fea3ed1e222 in loc_wipe (loc=loc@entry=0x7fe9d6f8b210) at xlator.c:695
#5  0x00007fea3f001b4b in glfs_resolve_component (fs=fs@entry=0x1d60e90, 
    subvol=subvol@entry=0x7fea2402aa40, parent=parent@entry=0x7fea1d77906c, 
    component=component@entry=0x7fea100028c0 "def70703", 
    iatt=iatt@entry=0x7fe9d6f8b3d0, force_lookup=<optimized out>)
    at glfs-resolve.c:368
#6  0x00007fea3f002133 in priv_glfs_resolve_at (fs=fs@entry=0x1d60e90, 
    subvol=subvol@entry=0x7fea2402aa40, at=at@entry=0x7fea1d77906c, 
    origpath=origpath@entry=0x7fe91e53bccc "def70703", 
    loc=loc@entry=0x7fe9d6f8b4d0, iatt=iatt@entry=0x7fe9d6f8b510, 
    follow=follow@entry=0, reval=reval@entry=0) at glfs-resolve.c:417
#7  0x00007fea3f003a78 in pub_glfs_h_lookupat (fs=0x1d60e90, 
    parent=<optimized out>, path=0x7fe91e53bccc "def70703", stat=0x7fe9d6f8b630, 
    follow=0) at glfs-handleops.c:102
#8  0x00007fea3f41e02c in lookup (parent=0x1e32e28, 
    path=0x7fe91e53bccc "def70703", handle=0x7fe9d6f8b840, 
    attrs_out=0x7fe9d6f8b760)
    at /usr/src/debug/nfs-ganesha-2.4-rc4-0.1.1-Source/FSAL/FSAL_GLUSTER/handle.c:112
#9  0x0000000000537358 in mdc_lookup_uncached (mdc_parent=0x1e2f340, 
    name=0x7fe91e53bccc "def70703", new_entry=0x7fe9d6f8b8d8, attrs_out=0x0)
    at /usr/src/debug/nfs-ganesha-2.4-rc4-0.1.1-Source/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:968
#10 0x000000000052dd26 in mdcache_readdir (dir_hdl=0x1e2f378, 
    whence=0x7fe9d6f8b970, dir_state=0x7fe9d6f8b980, 
    cb=0x43184b <populate_dirent>, attrmask=0, eod_met=0x7fe9d6f8be7b)
    at /usr/src/debug/nfs-ganesha-2.4-rc4-0.1.1-Source/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:626
#11 0x00000000004320b9 in fsal_readdir (directory=0x1e2f378, 
    cookie=52889545390524074, nbfound=0x7fe9d6f8be7c, eod_met=0x7fe9d6f8be7b, 
    attrmask=0, cb=0x48ff13 <nfs3_readdir_callback>, opaque=0x7fe9d6f8be30)
    at /usr/src/debug/nfs-ganesha-2.4-rc4-0.1.1-Source/FSAL/fsal_helper.c:1457
#12 0x000000000048fcfa in nfs3_readdir (arg=0x7fe978001468, req=0x7fe9780012a8, 
    res=0x7fea10002dd0)
    at /usr/src/debug/nfs-ganesha-2.4-rc4-0.1.1-Source/Protocols/NFS/nfs3_readdir.c:295
#13 0x000000000044ad6b in nfs_rpc_execute (reqdata=0x7fe978001280)
    at /usr/src/debug/nfs-ganesha-2.4-rc4-0.1.1-Source/MainNFSD/nfs_worker_thread.c:1281
#14 0x000000000044b625 in worker_run (ctx=0x1e7d560)
    at /usr/src/debug/nfs-ganesha-2.4-rc4-0.1.1-Source/MainNFSD/nfs_worker_thread.c:1548
#15 0x000000000050079f in fridgethr_start_routine (arg=0x1e7d560)
    at /usr/src/debug/nfs-ganesha-2.4-rc4-0.1.1-Source/support/fridgethr.c:550
#16 0x00007fea41d9edc5 in start_thread () from /lib64/libpthread.so.0
#17 0x00007fea4145e1cd in clone () from /lib64/libc.so.6
(gdb)

--- Additional comment from Soumya Koduri on 2016-09-16 09:16:02 EDT ---

This looks somewhat similar (at least the top of the backtrace) to the issue reported in bug 1353561. The bug appears to be in the Gluster sources rather than in nfs-ganesha itself.

Comment 2 Soumya Koduri 2016-10-11 08:46:56 UTC
(gdb) bt
#0  0x00007f6bef4126ff in __inode_ctx_free (inode=0x7f6bce160da0) at inode.c:332
#1  0x00007f6bef414b55 in __inode_destroy (table=<value optimized out>)
    at inode.c:353
#2  inode_table_prune (table=<value optimized out>) at inode.c:1543
#3  0x00007f6bef4153dc in inode_unref (inode=0x7f6bce160da0) at inode.c:524
#4  0x00007f6bef6e8066 in pub_glfs_h_close (object=0x7f6af4add550)
    at glfs-handleops.c:1365
#5  0x00007f6bef8fa524 in handle_release (obj_hdl=0x7f6af4b62ba8)
    at /usr/src/debug/nfs-ganesha-2.4.0/src/FSAL/FSAL_GLUSTER/handle.c:71
#6  0x00000000004e21e6 in mdcache_lru_clean (entry=0x7f6b5a15ab48)
    at /usr/src/debug/nfs-ganesha-2.4.0/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:421
#7  mdcache_lru_get (entry=0x7f6b5a15ab48)
    at /usr/src/debug/nfs-ganesha-2.4.0/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:1201
#8  0x00000000004ecbf1 in mdcache_alloc_handle (export=0xe035f0, 
    sub_handle=0x7f6af4b68fe8, attrs_in=0x7f6b5a15ac60, attrs_out=0x0, 
    new_directory=false, entry=0x7f6b5a15abc8, state=0x0)
    at /usr/src/debug/nfs-ganesha-2.4.0/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:117
#9  mdcache_new_entry (export=0xe035f0, sub_handle=0x7f6af4b68fe8, 
    attrs_in=0x7f6b5a15ac60, attrs_out=0x0, new_directory=false, 
    entry=0x7f6b5a15abc8, state=0x0)
    at /usr/src/debug/nfs-ganesha-2.4.0/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:371
#10 0x00000000004e521e in mdcache_alloc_and_check_handle (
    export=<value optimized out>, sub_handle=<value optimized out>, 
    new_obj=0x7f6b5a15ad30, new_directory=<value optimized out>, 
    attrs_in=<value optimized out>, attrs_out=0x0, tag=0x51f862 "lookup ", 
    parent=0xe02c50, name=0x7f6addf7157c "def79973", invalidate=true, state=0x0)
    at /usr/src/debug/nfs-ganesha-2.4.0/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:93
#11 0x00000000004ec215 in mdc_lookup_uncached (mdc_parent=0xe02c50, 
    name=0x7f6addf7157c "def79973", new_entry=0x7f6b5a15add8, 
    attrs_out=<value optimized out>)
    at /usr/src/debug/nfs-ganesha-2.4.0/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:986
#12 0x00000000004e45a6 in mdcache_readdir (dir_hdl=0xe02c88, 
    whence=<value optimized out>, dir_state=0x7f6b5a15ae50, 
    cb=0x42bca0 <populate_dirent>, attrmask=<value optimized out>, 
    eod_met=0x7f6b5a15b06f)
    at /usr/src/debug/nfs-ganesha-2.4.0/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c
(remaining frames truncated)
(gdb) p *(xlator_t *)(long)inode->_ctx[0]
$7 = {name = 0x7f6af41a4118 "", type = 0x7f6ae4206978 "(\300\026\310j\177", 
  instance_name = 0x0, next = 0x7561c0, prev = 0x4e4c40, parents = 0x4e3e50, 
  children = 0x4e3b00, options = 0x4e36e0, dlhandle = 0x4e4bf0, fops = 0x4e4320, 
  cbks = 0x4e6990, dumpops = 0x4e6550, volume_options = {next = 0x4e6100, 
    prev = 0x4e5cc0}, fini = 0x4e3e60 <mdcache_readlink>, 
  init = 0x4e4300 <mdcache_test_access>, 
  reconfigure = 0x4e7710 <mdcache_getattrs>, 
  mem_acct_init = 0x4e7380 <mdcache_setattrs>, notify = 0x4e3b70 <mdcache_link>, 
  loglevel = 5125088, latencies = {{min = 5131344, max = 5133440, 
      total = 2.5431179326767319e-317, std = 2.5424618134990547e-317, 
      mean = 2.5416317832140414e-317, count = 5148272}, {min = 5148048, 
      max = 5147024, total = 2.5423669528950532e-317, 
      std = 2.5416792135160422e-317, mean = 2.5417266438180429e-317, 
      count = 5146832}, {min = 5144608, max = 5144704, 
      total = 2.5418610296737118e-317, std = 2.5448333285990927e-317, 
      mean = 2.5448807589010935e-317, count = 5151088}, {min = 5150992, 
      max = 5151184, total = 2.5450704801090965e-317, 
      std = 2.5451179104110973e-317, mean = 2.545165340713098e-317, 
      count = 5125184}, {min = 5125280, max = 5125376, 
      total = 2.5323196339212256e-317, std = 2.5323670642232264e-317, 
      mean = 2.5324144945252271e-317, count = 5125760}, {min = 5151568, 
      max = 5151664, total = 2.5453076316191003e-317, 
      std = 2.5453550619211011e-317, mean = 2.5436950013510745e-317, 
      count = 5144880}, {min = 5144976, max = 5146608, 
      total = 2.5433471791364022e-317, std = 2.5426515347070578e-317, 
      mean = 2.5422799973413851e-317, count = 5145456}, {min = 5146192, 
      max = 5145072, total = 2.5397899064863453e-317, 
      std = 2.5420507508817148e-317, mean = 0, count = 0}, {min = 0, max = 0, 
      total = 0, std = 0, mean = 0, count = 1}, {min = 67, max = 249090, 
      total = -3.1019956711109088e-195, std = 6.9216917256078698e-310, 
      mean = 6.9217287135458876e-310, count = 1433582}, {min = 1, max = 0, 
      total = 3.3102398271363518e-322, std = 1.230668117225961e-318, mean = 0, 
      count = 10920380356516740874}, {min = 4294967716, max = 0, total = 0, 
      std = 0, mean = 0, count = 1475853366}, {min = 705328000, max = 0, 
      total = 0, std = 7.2916844643976761e-315, mean = 4.1254480637239053e-315, 
      count = 1475853366}, {min = 705329000, max = 1475853366, 
      total = 4.1254480637239053e-315, std = 0, mean = 5.1870226870842929e-210, 
      count = 0}, {min = 60, max = 140097749877984, 
      total = 6.9217485337872854e-310, std = 6.9216851455341886e-310, 
      mean = 7.5076972142905743e+160, count = 140101559541008}, {
      min = 140097348841264, max = 32, total = 0, std = 1.4821969375237396e-323, 
      mean = 7.2917110253667966e-315, count = 1475858742}, {min = 0, max = 0, 
... (remaining output truncated)
(gdb) p *(xlator_t *)(long)inode->_ctx[1]
$8 = {name = 0x7f6bd4016af0 "ozone-dht", 
  type = 0x7f6bd401dc50 "cluster/distribute", instance_name = 0x0, 
  next = 0x7f6bd401c0d0, prev = 0x7f6bd401e900, parents = 0x7f6bd401f580, 
  children = 0x7f6bd401dcc0, options = 0x7f6be97dbddc, dlhandle = 0x7f6bd401dd10, 
  fops = 0x7f6be0fcd3e0, cbks = 0x7f6be0fcd780, dumpops = 0x7f6be0fcd720, 
  volume_options = {next = 0x7f6bd401e3b0, prev = 0x7f6bd401e3b0}, 
  fini = 0x7f6be0db2760 <dht_fini>, init = 0x7f6be0db3160 <dht_init>, 
  reconfigure = 0x7f6be0db2020 <dht_reconfigure>, 
  mem_acct_init = 0x7f6be0db2670 <mem_acct_init>, 
  notify = 0x7f6be0d70110 <dht_notify>, loglevel = GF_LOG_NONE, latencies = {{
      min = 0, max = 0, total = 0, std = 0, mean = 0, 
      count = 0} <repeats 55 times>}, history = 0x0, ctx = 0xd352e0, 
  graph = 0x7f6bd4003790, itable = 0x0, init_succeeded = 1 '\001', 
  private = 0x7f6bd4058710, mem_acct = 0x7f6bd4054fe0, winds = 0, 
  switched = 0 '\000', local_pool = 0x7f6bd4059170, is_autoloaded = _gf_false}
(gdb) p *(xlator_t *)(long)inode->_ctx[2]
$9 = {name = 0x7f6bd401dbe0 "ozone-io-cache", 
  type = 0x7f6bd40237b0 "performance/io-cache", instance_name = 0x0, 
  next = 0x7f6bd4021670, prev = 0x7f6bd40242a0, parents = 0x7f6bd4024f20, 
  children = 0x7f6bd4023820, options = 0x7f6be97dc08c, dlhandle = 0x7f6bd40238c0, 
  fops = 0x7f6be071f020, cbks = 0x7f6be071f3c0, dumpops = 0x7f6be071f360, 
  volume_options = {next = 0x7f6bd40241d0, prev = 0x7f6bd40241d0}, 
  fini = 0x7f6be0510480 <fini>, init = 0x7f6be0516d20 <init>, 
  reconfigure = 0x7f6be0517210 <reconfigure>, 
  mem_acct_init = 0x7f6be0510ac0 <mem_acct_init>, 
  notify = 0x7f6bef47ffc0 <default_notify>, loglevel = GF_LOG_NONE, latencies = {{
      min = 0, max = 0, total = 0, std = 0, mean = 0, 
      count = 0} <repeats 55 times>}, history = 0x0, ctx = 0xd352e0, 
  graph = 0x7f6bd4003790, itable = 0x0, init_succeeded = 1 '\001', 
  private = 0x7f6bd403f7f0, mem_acct = 0x7f6bd403c4d0, winds = 0, 
  switched = 0 '\000', local_pool = 0x7f6bd403f960, is_autoloaded = _gf_false}
(gdb) p *(xlator_t *)(long)inode->_ctx[3]
$10 = {name = 0x7f6bd4024e40 "ozone-quick-read", 
  type = 0x7f6bd4024eb0 "performance/quick-read", instance_name = 0x0, 
  next = 0x7f6bd4022c10, prev = 0x7f6bd40257b0, parents = 0x7f6bd4026bb0, 
  children = 0x7f6bd4025750, options = 0x7f6be97dc138, dlhandle = 0x7f6bd4024fc0, 
  fops = 0x7f6be050b020, cbks = 0x7f6be050b360, dumpops = 0x7f6be050b3a0, 
  volume_options = {next = 0x7f6bd4025680, prev = 0x7f6bd4025680}, 
  fini = 0x7f6be0305c90 <fini>, init = 0x7f6be0309200 <init>, 
  reconfigure = 0x7f6be0305fc0 <reconfigure>, 
  mem_acct_init = 0x7f6be03061b0 <mem_acct_init>, 
  notify = 0x7f6bef47ffc0 <default_notify>, loglevel = GF_LOG_NONE, latencies = {{
      min = 0, max = 0, total = 0, std = 0, mean = 0, 
      count = 0} <repeats 55 times>}, history = 0x0, ctx = 0xd352e0, 
  graph = 0x7f6bd4003790, itable = 0x0, init_succeeded = 1 '\001', 
  private = 0x7f6bd403c3f0, mem_acct = 0x7f6bd4039170, winds = 0, 
  switched = 0 '\000', local_pool = 0x0, is_autoloaded = _gf_false}
(gdb) p *(xlator_t *)(long)inode->_ctx[4]
$11 = {name = 0x7f6bd4022a30 "ozone-md-cache", 
  type = 0x7f6bd4027880 "performance/md-cache", instance_name = 0x0, 
  next = 0x7f6bd40257b0, prev = 0x7f6bd4028130, parents = 0x7f6bd4029460, 
  children = 0x7f6bd40278f0, options = 0x7f6be97dc290, dlhandle = 0x7f6bd4027990, 
  fops = 0x7f6bd3ffe000, cbks = 0x7f6bd3ffe340, dumpops = 0x0, volume_options = {
    next = 0x7f6bd4028000, prev = 0x7f6bd4028000}, fini = 0x7f6bd3deed60 <fini>, 
  init = 0x7f6bd3deedb0 <init>, reconfigure = 0x7f6bd3deefc0 <reconfigure>, 
  mem_acct_init = 0x7f6bd3deefb0 <mem_acct_init>, 
  notify = 0x7f6bef47ffc0 <default_notify>, loglevel = GF_LOG_NONE, latencies = {{
      min = 0, max = 0, total = 0, std = 0, mean = 0, 
      count = 0} <repeats 55 times>}, history = 0x0, ctx = 0xd352e0, 
  graph = 0x7f6bd4003790, itable = 0x0, init_succeeded = 1 '\001', 
  private = 0x7f6bd4036080, mem_acct = 0x7f6bd4032f90, winds = 0, 
  switched = 0 '\000', local_pool = 0x0, is_autoloaded = _gf_false}
(gdb) p *(xlator_t *)(long)inode->_ctx[5]
$12 = {name = 0xdc6da0 "gfapi", type = 0xdc6e00 "mount/api", instance_name = 0x0, 
  next = 0x7f6bd40297f0, prev = 0x0, parents = 0x0, children = 0x0, 
  options = 0x7f6bec3f506c, dlhandle = 0xdc6ee0, fops = 0x7f6be2cc30c0, 
  cbks = 0x7f6be2cc3000, dumpops = 0x7f6be2cc3060, volume_options = {
    next = 0xdc8a80, prev = 0xdea930}, fini = 0x7f6be2ac1a90 <fini>, 
  init = 0x7f6be2ac1a80 <init>, reconfigure = 0, 
  mem_acct_init = 0x7f6be2ac1ad0 <mem_acct_init>, 
  notify = 0x7f6be2ac1c50 <notify>, loglevel = GF_LOG_NONE, latencies = {{
      min = 0, max = 0, total = 0, std = 0, mean = 0, 
      count = 0} <repeats 55 times>}, history = 0x0, ctx = 0xd352e0, graph = 0x0, 
  itable = 0x0, init_succeeded = 1 '\001', private = 0xd35150, 
  mem_acct = 0xdc8b20, winds = 0, switched = 0 '\000', local_pool = 0x0, 
  is_autoloaded = _gf_false}
(gdb) p *(xlator_t *)(long)inode->_ctx[6]
Cannot access memory at address 0x0


Except for inode->_ctx[0], every other ctx entry that was set is still intact.
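
For context, below is an abridged sketch of the __inode_ctx_free() loop from libglusterfs (inode.c, around line 332 in the traces above); it is an illustration of the failure mode, not the verbatim upstream code.

/* Abridged sketch of __inode_ctx_free() (libglusterfs inode.c).
 * Simplified for illustration; names follow the Gluster sources. */
static void
__inode_ctx_free (inode_t *inode)
{
        int       index = 0;
        xlator_t *xl    = NULL;

        for (index = 0; index < inode->table->ctxcount; index++) {
                if (!inode->_ctx[index].xl_key)
                        continue;

                /* Each ctx slot records the xlator that stored it;
                 * this is the same pointer inspected above with
                 * "p *(xlator_t *)(long)inode->_ctx[N]". */
                xl = (xlator_t *)(long)inode->_ctx[index].xl_key;

                /* inode.c:332 -- the faulting line in all three
                 * backtraces. If _ctx[0].xl_key refers to freed or
                 * overwritten memory (as the garbage in the $7 dump
                 * suggests), dereferencing xl->cbks segfaults here. */
                if (xl->cbks->forget)
                        xl->cbks->forget (xl, inode);
        }
}

Since _ctx[1] through _ctx[5] still point at valid xlators (ozone-dht, io-cache, quick-read, md-cache, gfapi) while _ctx[0] is garbage, the SIGSEGV at frame #0 follows directly from the stale xlator pointer in the first slot.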

Comment 5 Soumya Koduri 2016-11-04 11:54:58 UTC
Since quite a few refcount-related fixes have gone in, could you please check whether this issue still exists with the latest available builds (gluster and nfs-ganesha)?

Comment 8 Arthy Loganathan 2016-11-17 05:50:15 UTC
Tried creating and removing ~200,000 files from 2 clients. The issue is not seen with the latest build:

glusterfs-ganesha-3.8.4-5.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.1-1.el7rhgs.x86_64

Comment 10 Arthy Loganathan 2016-11-17 08:47:16 UTC
The issue is not seen with the latest build, hence moving the bug to the Verified state.

Comment 12 errata-xmlrpc 2017-03-23 06:23:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2017-0493.html

