Bug 1624796

Summary: mkdir -p fails with "No data available" when root-squash is enabled
Product: [Community] GlusterFS
Component: distribute
Version: mainline
Severity: medium
Priority: medium
Status: CLOSED CURRENTRELEASE
Reporter: Susant Kumar Palai <spalai>
Assignee: bugs <bugs>
CC: amukherj, atumball, bkunal, jthottan, moagrawa, nbalacha, nravinas, rabhat, rgowdapp, rhinduja, rhs-bugs, sankarshan, slenzen, spalai, storage-qa-internal, vbellur
Keywords: ZStream
Fixed In Version: glusterfs-6.0
Type: Bug
Clone Of: 1597507
Bug Depends On: 1597507
Last Closed: 2019-03-25 16:30:33 UTC

Comment 1 Worker Ant 2018-09-03 12:01:13 UTC
REVIEW: https://review.gluster.org/21062 (dht: Operate internal fops with negative pid) posted (#1) for review on master by Susant Palai

Comment 2 Worker Ant 2018-09-20 15:43:55 UTC
COMMIT: https://review.gluster.org/21062 committed in master by "N Balachandran" <nbalacha> with the commit message: dht: Operate internal fops with negative pid

With root-squash enabled, all root credentials are converted to the
anonymous uid/gid (65535), which ordinarily does not carry the
permission bits needed to perform the operation. However, posix-acl
will still allow operations on an inode as long as the inode's ctx
holds the ngroup information and ngroup includes the owning group.

The problem we ran into recently was that the posix-acl xlator had not
cached the ngroup info, so some of the DHT internal fops (e.g. the
layout setxattr) failed with root-squash enabled.

DHT internal fops now use a negative pid to mark the operation as
coming from an internal client, so posix-acl allows them through.

Change-Id: I5bb8d068389bf4c94629d668a16015a95ccb53ab
fixes: bz#1624796
Signed-off-by: Susant Palai <spalai>

Comment 3 Shyamsundar 2019-03-25 16:30:33 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

Comment 4 slenzen 2020-03-06 12:36:54 UTC
*** Bug 1757178 has been marked as a duplicate of this bug. ***