Bug 782385 - POSIX compliance tests fail when the GlusterFS native client is mounted with -o acl
Summary: POSIX compliance tests fail when the GlusterFS native client is mounted with -o acl
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: GlusterFS
Classification: Community
Component: access-control
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: shishir gowda
QA Contact: Anush Shetty
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-01-17 11:07 UTC by Amar Tumballi
Modified: 2013-12-19 00:07 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-01-20 11:47:35 UTC
Regression: ---
Mount Type: ---
Documentation: DP
CRM:
Verified Versions:
Embargoed:



Description Amar Tumballi 2012-01-17 11:07:01 UTC
Description of problem:
When the GlusterFS native client is mounted with the -o acl option, several tests in the POSIX compliance suite fail.

Version-Release number of selected component (if applicable):
mainline, git head

How reproducible:
100%

Steps to Reproduce:
1. Create and start a distributed-replicate volume.
2. mount -t glusterfs -o acl localhost:/volname /mnt/glusterfs
3. Run the POSIX compliance suite (see the sketch after these steps).

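A minimal reproduction sketch, assuming two hypothetical servers (server1, server2), brick paths under /bricks, and a pjd-fstest-style suite driven by prove; the volume name and mount point mirror the steps above:

  # create and start a 2x2 distributed-replicate volume
  gluster volume create volname replica 2 \
      server1:/bricks/b1 server2:/bricks/b1 \
      server1:/bricks/b2 server2:/bricks/b2
  gluster volume start volname

  # mount the native client with POSIX ACL support enabled
  mount -t glusterfs -o acl localhost:/volname /mnt/glusterfs

  # run the compliance tests from inside the mount
  cd /mnt/glusterfs
  prove -r /home/del/data/pt/tests
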
Actual results:
/home/del/data/pt/tests/chmod/05.t    (Wstat: 0 Tests: 14 Failed: 1)
  Failed test:  8
/home/del/data/pt/tests/chown/05.t    (Wstat: 0 Tests: 15 Failed: 2)
  Failed tests:  8, 10
/home/del/data/pt/tests/link/02.t     (Wstat: 0 Tests: 10 Failed: 2)
  Failed tests:  4, 6
/home/del/data/pt/tests/link/03.t     (Wstat: 0 Tests: 16 Failed: 2)
  Failed tests:  8-9
/home/del/data/pt/tests/link/06.t     (Wstat: 0 Tests: 18 Failed: 1)
  Failed test:  11
/home/del/data/pt/tests/link/07.t     (Wstat: 0 Tests: 17 Failed: 2)
  Failed tests:  10, 12
/home/del/data/pt/tests/open/05.t     (Wstat: 0 Tests: 12 Failed: 1)
  Failed test:  7
/home/del/data/pt/tests/open/06.t     (Wstat: 0 Tests: 72 Failed: 1)
  Failed test:  69
/home/del/data/pt/tests/truncate/05.t (Wstat: 0 Tests: 15 Failed: 2)
  Failed tests:  8, 10


Expected results:
no failures

Additional info:
I suspect this also happens on a single-brick volume.

Comment 1 shishir gowda 2012-01-20 11:47:35 UTC
This is a known issue; the work-around is to mount with the --entry-timeout=0 option.
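
For example (a sketch, assuming the mount.glusterfs helper accepts entry-timeout as a mount option; the volume name and mount point are taken from the reproduction steps above):

  mount -t glusterfs -o acl,entry-timeout=0 localhost:/volname /mnt/glusterfs

or, invoking the client binary directly:

  glusterfs --acl --entry-timeout=0 --volfile-server=localhost --volfile-id=volname /mnt/glusterfs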

