Description of problem:
I have ACLs enabled for nfs-ganesha. I executed the pynfs test suite and then attempted to delete the data it created using the command "rm -rf *". The nfs-ganesha process then crashed on all nodes.

Version-Release number of selected component (if applicable):
nfs-ganesha-2.2.0-3.el6rhs.x86_64
glusterfs-3.7.1-4.el6rhs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a volume of type 6x2 and start it.
2. Configure nfs-ganesha and enable it.
3. Export the volume using the ganesha.enable command.
4. Execute pynfs.
5. Delete the data created in step 4 using "rm -rf *".

Actual results:
In step 5 the ganesha process crashes.

(gdb) bt
#0  0x0000003d45e32625 in raise () from /lib64/libc.so.6
#1  0x0000003d45e33e05 in abort () from /lib64/libc.so.6
#2  0x0000003d45e70537 in __libc_message () from /lib64/libc.so.6
#3  0x0000003d45e75f4e in malloc_printerr () from /lib64/libc.so.6
#4  0x0000003d45e78ca0 in _int_free () from /lib64/libc.so.6
#5  0x00000000004fbf51 in gsh_free ()
#6  0x00000000004fc31d in nfs4_ace_free ()
#7  0x00000000004fc6f7 in nfs4_acl_new_entry ()
#8  0x00007fe23309d649 in posix_acl_2_fsal_acl_for_dir () from /usr/lib64/ganesha/libfsalgluster.so.4.2.0
#9  0x00007fe233097c92 in glusterfs_get_acl () from /usr/lib64/ganesha/libfsalgluster.so.4.2.0
#10 0x00007fe233094ec3 in getattrs () from /usr/lib64/ganesha/libfsalgluster.so.4.2.0
#11 0x00000000004dbdd1 in cache_inode_refresh_attrs ()
#12 0x00000000004defcf in cache_inode_lock_trust_attrs ()
#13 0x00000000004cd448 in cache_inode_getattr ()
#14 0x00000000004d1d5b in cache_inode_readdir ()
#15 0x000000000047b684 in nfs4_op_readdir ()
#16 0x000000000045fde9 in nfs4_Compound ()
#17 0x000000000045497d in nfs_rpc_execute ()
#18 0x0000000000455606 in worker_run ()
#19 0x000000000050d78a in fridgethr_start_routine ()
#20 0x0000003d46207a51 in start_thread () from /lib64/libpthread.so.0
#21 0x0000003d45ee896d in clone () from /lib64/libc.so.6
[root@nfs11 ~]# pcs status
Cluster name: reaper
Last updated: Wed Jun 24 00:35:42 2015
Last change: Wed Jun 24 00:14:03 2015
Stack: cman
Current DC: nfs11 - partition with quorum
Version: 1.1.11-97629de
4 Nodes configured
20 Resources configured

Online: [ nfs11 nfs12 nfs13 nfs14 ]

Full list of resources:
 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ nfs11 nfs12 nfs13 nfs14 ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Started: [ nfs11 nfs12 nfs13 nfs14 ]
 nfs11-cluster_ip-1  (ocf::heartbeat:IPaddr): Stopped
 nfs11-trigger_ip-1  (ocf::heartbeat:Dummy):  Started nfs12
 nfs12-cluster_ip-1  (ocf::heartbeat:IPaddr): Stopped
 nfs12-trigger_ip-1  (ocf::heartbeat:Dummy):  Started nfs11
 nfs13-cluster_ip-1  (ocf::heartbeat:IPaddr): Stopped
 nfs13-trigger_ip-1  (ocf::heartbeat:Dummy):  Started nfs13
 nfs14-cluster_ip-1  (ocf::heartbeat:IPaddr): Stopped
 nfs14-trigger_ip-1  (ocf::heartbeat:Dummy):  Started nfs14
 nfs11-dead_ip-1     (ocf::heartbeat:Dummy):  Started nfs11
 nfs14-dead_ip-1     (ocf::heartbeat:Dummy):  Started nfs14
 nfs13-dead_ip-1     (ocf::heartbeat:Dummy):  Started nfs13
 nfs12-dead_ip-1     (ocf::heartbeat:Dummy):  Started nfs12

Expected results:
Data deletion should succeed regardless of whether ACL is enabled or disabled. There should not be any crash of nfsd during data creation.

Additional info:
Created attachment 1042342 [details] coredump of nfsd
Created attachment 1042343 [details] nfs11 ganesha-gfapi.log
Created attachment 1042344 [details] brick logs
Normal deletion using "rm -rf *" on the mount does not crash in my setup when ACL is enabled.
Sent out a patch upstream: https://review.gerrithub.io/#/c/237701/
The root cause of this bug is that the inherited-ACL conversion is also invoked for file types other than directories (the crash is seen for a block special file). The crash is fixed in the above-mentioned patch.
I performed the steps mentioned in the description section to verify the BZ and found that nfs-ganesha did not crash.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-1495.html