Bug 782385

Summary: POSIX compliance tests fail when the GlusterFS native client is mounted with -o acl
Product: [Community] GlusterFS
Component: access-control
Version: mainline
Status: CLOSED NOTABUG
Severity: urgent
Priority: unspecified
Reporter: Amar Tumballi <amarts>
Assignee: shishir gowda <sgowda>
QA Contact: Anush Shetty <ashetty>
CC: gluster-bugs, nsathyan, vraman
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Last Closed: 2012-01-20 11:47:35 UTC

Description Amar Tumballi 2012-01-17 11:07:01 UTC
Description of problem:
As per the subject: the POSIX compliance test suite reports failures when the GlusterFS native client is mounted with the acl mount option.

Version-Release number of selected component (if applicable):
mainline, git head

How reproducible:
100%

Steps to Reproduce:
1. Create and start a distributed-replicate volume.
2. mount -t glusterfs -o acl localhost:/volname /mnt/glusterfs
3. Run the POSIX compliance suite (a repro sketch follows below).
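A minimal repro sketch, assuming a 2x2 distributed-replicate layout and that the compliance suite is a pjd-fstest-style suite run with prove; hostnames, brick paths, and the /home/del/data/pt checkout location are placeholders:

  # Create and start a 2x2 distributed-replicate volume (hosts/bricks are placeholders)
  gluster volume create volname replica 2 \
      server1:/bricks/b1 server2:/bricks/b2 \
      server3:/bricks/b3 server4:/bricks/b4
  gluster volume start volname

  # Mount the native (FUSE) client with ACL support enabled
  mount -t glusterfs -o acl localhost:/volname /mnt/glusterfs

  # Run the compliance suite against the mount
  cd /mnt/glusterfs
  prove -r /home/del/data/pt/tests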
  
Actual results:
/home/del/data/pt/tests/chmod/05.t    (Wstat: 0 Tests: 14 Failed: 1)
  Failed test:  8
/home/del/data/pt/tests/chown/05.t    (Wstat: 0 Tests: 15 Failed: 2)
  Failed tests:  8, 10
/home/del/data/pt/tests/link/02.t     (Wstat: 0 Tests: 10 Failed: 2)
  Failed tests:  4, 6
/home/del/data/pt/tests/link/03.t     (Wstat: 0 Tests: 16 Failed: 2)
  Failed tests:  8-9
/home/del/data/pt/tests/link/06.t     (Wstat: 0 Tests: 18 Failed: 1)
  Failed test:  11
/home/del/data/pt/tests/link/07.t     (Wstat: 0 Tests: 17 Failed: 2)
  Failed tests:  10, 12
/home/del/data/pt/tests/open/05.t     (Wstat: 0 Tests: 12 Failed: 1)
  Failed test:  7
/home/del/data/pt/tests/open/06.t     (Wstat: 0 Tests: 72 Failed: 1)
  Failed test:  69
/home/del/data/pt/tests/truncate/05.t (Wstat: 0 Tests: 15 Failed: 2)
  Failed tests:  8, 10


Expected results:
no failures

Additional info:
I suspect this also happens on a single-brick volume.

Comment 1 shishir gowda 2012-01-20 11:47:35 UTC
This is a known issue; the workaround is to mount with the --entry-timeout=0 option.
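A sketch of that workaround, assuming the mount helper passes entry-timeout through with -o; otherwise the option can be given to the glusterfs binary directly, which is where the --entry-timeout spelling comes from:

  # Via the mount helper (entry-timeout pass-through assumed)
  mount -t glusterfs -o acl,entry-timeout=0 localhost:/volname /mnt/glusterfs

  # Or by starting the FUSE client directly
  glusterfs --acl --entry-timeout=0 --volfile-server=localhost --volfile-id=volname /mnt/glusterfs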