Bug 1229564

Summary: [RFE]: NFSv4 ACL
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Jiffin <jthottan>
Component: nfs-ganesha
Assignee: Jiffin <jthottan>
Status: CLOSED DUPLICATE
QA Contact: storage-qa-internal <storage-qa-internal>
Severity: high
Docs Contact:
Priority: high
Version: rhgs-3.1
CC: ansubram, jthottan, kkeithle, ndevos, nlevinki, saujain
Target Milestone: ---
Keywords: Triaged
Target Release: RHGS 3.1.0
Hardware: All
OS: All
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1215174
Environment:
Last Closed: 2015-06-09 06:49:28 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1215174
Bug Blocks: 1229567, 1229569

Description Jiffin 2015-06-09 05:57:34 UTC
+++ This bug was initially created as a clone of Bug #1215174 +++

Description of problem:

[RFE] Add NFSv4 and POSIX ACL support to nfs-ganesha exports of Gluster volumes.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

--- Additional comment from Saurabh on 2015-05-07 09:01:40 EDT ---

Presently I am facing issues when ACLs are enabled: the POSIX test suite (pjd-fstest) does not move beyond a certain point. In this run it gets stuck at the chmod/00.t test case:
[root@rhsauto005 dir]# time prove -r /opt/qa/tools/pjd-fstest-20080816/tests/
/opt/qa/tools/pjd-fstest-20080816/tests/chflags/00.t ... ok   
/opt/qa/tools/pjd-fstest-20080816/tests/chflags/01.t ... ok   
/opt/qa/tools/pjd-fstest-20080816/tests/chflags/02.t ... ok   
/opt/qa/tools/pjd-fstest-20080816/tests/chflags/03.t ... ok   
/opt/qa/tools/pjd-fstest-20080816/tests/chflags/04.t ... ok   
/opt/qa/tools/pjd-fstest-20080816/tests/chflags/05.t ... ok   
/opt/qa/tools/pjd-fstest-20080816/tests/chflags/06.t ... ok   
/opt/qa/tools/pjd-fstest-20080816/tests/chflags/07.t ... ok   
/opt/qa/tools/pjd-fstest-20080816/tests/chflags/08.t ... ok   
/opt/qa/tools/pjd-fstest-20080816/tests/chflags/09.t ... ok   
/opt/qa/tools/pjd-fstest-20080816/tests/chflags/10.t ... ok   
/opt/qa/tools/pjd-fstest-20080816/tests/chflags/11.t ... ok   
/opt/qa/tools/pjd-fstest-20080816/tests/chflags/12.t ... ok   
/opt/qa/tools/pjd-fstest-20080816/tests/chflags/13.t ... ok   
/opt/qa/tools/pjd-fstest-20080816/tests/chmod/00.t ..... 1/58 

Whereas if I disable ACLs, the same POSIX test suite runs to completion.
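For isolation, chmod/00.t largely exercises chmod(2) followed by reading back the resulting mode. A minimal stand-in is sketched below; the scratch directory is a hypothetical local path, so substitute a directory on the ACL-enabled NFS mount to probe this export:

```shell
#!/bin/sh
# Minimal stand-in for the kind of operation chmod/00.t performs:
# create a file, change its mode, and read the mode back.
set -e
dir=$(mktemp -d)           # substitute a directory on the NFS mount to test the export
f="$dir/testfile"
touch "$f"
chmod 0642 "$f"
mode=$(stat -c '%a' "$f")  # GNU stat: print the octal permission bits
echo "mode=$mode"
rm -rf "$dir"
```

If the hang is in the ACL code path, a bare chmod like this on the ACL-enabled mount should stall the same way the test case does.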

Additional info when ACLs are enabled:

[root@nfs1 ~]# ps -eaf | grep nfs
root     22909     1  0 14:42 ?        00:00:04 /usr/bin/ganesha.nfsd -L /var/log/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT -p /var/run/ganesha.nfsd.pid
root     23307 25794  0 15:06 pts/0    00:00:00 grep nfs
[root@nfs1 ~]# 
[root@nfs1 ~]# 
[root@nfs1 ~]# 
[root@nfs1 ~]# 
[root@nfs1 ~]# service nfs-ganesha status
ganesha.nfsd (pid  22909) is running...

[root@nfs1 ~]# gluster volume status
Status of volume: vol0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.148:/rhs/brick1/d1r1         49152     0          Y       21347
Brick 10.70.37.77:/rhs/brick1/d1r2          49152     0          Y       11829
Brick 10.70.37.76:/rhs/brick1/d2r1          49152     0          Y       32710
Brick 10.70.37.69:/rhs/brick1/d2r2          49152     0          Y       26127
Brick 10.70.37.148:/rhs/brick1/d3r1         49153     0          Y       21364
Brick 10.70.37.77:/rhs/brick1/d3r2          49153     0          Y       11846
Brick 10.70.37.76:/rhs/brick1/d4r1          49153     0          Y       32727
Brick 10.70.37.69:/rhs/brick1/d4r2          49153     0          Y       26144
Brick 10.70.37.148:/rhs/brick1/d5r1         49154     0          Y       21381
Brick 10.70.37.77:/rhs/brick1/d5r2          49154     0          Y       11863
Brick 10.70.37.76:/rhs/brick1/d6r1          49154     0          Y       32744
Brick 10.70.37.69:/rhs/brick1/d6r2          49154     0          Y       26161
Self-heal Daemon on localhost               N/A       N/A        Y       21407
Self-heal Daemon on 10.70.37.76             N/A       N/A        Y       304  
Self-heal Daemon on 10.70.37.77             N/A       N/A        Y       11892
Self-heal Daemon on 10.70.37.69             N/A       N/A        Y       26188
 
Task Status of Volume vol0
------------------------------------------------------------------------------
There are no active volume tasks



[root@nfs1 ~]# cat /etc/ganesha/exports/export.vol0.conf 
# WARNING : Using Gluster CLI will overwrite manual
# changes made to this file. To avoid it, edit the
# file, copy it over to all the NFS-Ganesha nodes
# and run ganesha-ha.sh --refresh-config.
EXPORT {
      Export_Id = 2;
      Path = "/vol0";
      FSAL {
           name = GLUSTER;
           hostname = "localhost";
           volume = "vol0";
      }
      Access_type = RW;
      Squash = "No_root_squash";
      Pseudo = "/vol0";
      Protocols = "3", "4";
      Transports = "UDP", "TCP";
      SecType = "sys";
}
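For reference, whether nfs-ganesha applies ACLs on an export is controlled per export. A minimal sketch of the relevant knob, assuming a ganesha release that supports the `Disable_ACL` EXPORT option (the value shown and the comment are illustrative, not taken from this setup):

```
# Hypothetical fragment for illustration only.
EXPORT {
      Export_Id = 2;
      Path = "/vol0";
      Disable_ACL = false;   # false => NFSv4/POSIX ACL processing enabled
      # ... remaining options as in the export above ...
}
```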

--- Additional comment from Jiffin on 2015-05-21 08:32:17 EDT ---

With the latest changes to the ACL code, the hang is no longer present.

Comment 4 Jiffin 2015-06-09 06:49:28 UTC

*** This bug has been marked as a duplicate of bug 1228155 ***