Bug 1663821

Summary: Changing permissions on the root directory (the directory on which the volume is mounted) from a client node while a brick is down causes inconsistent root-directory permissions on the client node after the brick comes back up.
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Sayalee <saraut>
Component: distribute
Assignee: Barak Sason Rofman <bsasonro>
Status: CLOSED ERRATA
QA Contact: Pranav Prakash <prprakas>
Severity: high
Docs Contact:
Priority: unspecified
Version: rhgs-3.4
CC: bsasonro, pasik, pprakash, prprakas, puebele, rhs-bugs, sheggodu, storage-qa-internal
Target Milestone: ---
Keywords: ZStream
Target Release: RHGS 3.5.z Batch Update 4
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-6.0-50
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
: 1791285 (view as bug list)
Environment:
Last Closed: 2021-04-29 07:20:37 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1791285

Description Sayalee 2019-01-07 07:52:47 UTC
Description of problem:
The permissions of the directory on which the volume is mounted (the root directory) keep alternating between the old permissions and the updated ones. This happens when the permissions are modified on the root directory from a client node while a brick is down, and the brick is then brought back up.
(This behavior was observed while verifying bug: https://bugzilla.redhat.com/show_bug.cgi?id=1648296)

Version-Release number of selected component (if applicable):
3.12.2-36

How reproducible:
Always
(Was reproduced on 3.12.2-34 build and 3.12.2-32 build as well)

Steps to Reproduce:
1) Create a 4-brick pure distributed volume.
2) Mount it on a client node using FUSE.
3) The default permissions of the directory on which the volume is mounted are "755"; change them to "555" using chmod.
4) Perform a lookup on the client node as well as on the back-end bricks (ls -all & stat).
5) Kill a brick.
6) From the client node, change the permissions of the directory on which the volume is mounted: chmod 755 /mnt/dir
7) Perform a lookup on the client node as well as on the back-end bricks (ls -all & stat).
8) From the client node, change the permissions again: chmod 444 /mnt/dir
9) Perform a lookup on the client node as well as on the back-end bricks (ls -all & stat).
10) From the client node, change the permissions back: chmod 755 /mnt/dir
11) Perform a lookup on the client node as well as on the back-end bricks (ls -all & stat).
12) Bring the brick back using "gluster v start <volname> force".
13) Perform a lookup on the client node as well as on the back-end bricks (ls -all & stat).
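The chmod/stat sequence in the steps above can be sketched as a shell script. This is a local sketch only: it uses a scratch directory in place of the actual FUSE mount point (/mnt/dir), since reproducing the bug itself requires a live Gluster cluster with a killed brick.

```shell
#!/bin/sh
# Scratch directory standing in for the FUSE mount point of the volume.
# On a real setup, this would be /mnt/dir with one brick process killed.
MNT=$(mktemp -d)

chmod 555 "$MNT"        # step 3: restrict the root-directory permissions
stat -c '%a' "$MNT"     # step 4: lookup; prints 555

# Steps 6-11: flip the permissions while (on a real cluster) a brick is down.
chmod 755 "$MNT"
stat -c '%a' "$MNT"     # prints 755
chmod 444 "$MNT"
stat -c '%a' "$MNT"     # prints 444
chmod 755 "$MNT"
stat -c '%a' "$MNT"     # prints 755

# Step 12, on a real cluster only (not runnable here):
#   gluster v start <volname> force

rmdir "$MNT"
```

On a plain local directory the final lookup always reports "755"; the bug is that on the Gluster mount the same sequence leaves the revived brick at the stale "555" and lookups alternate between the two values.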

Actual results:
* On the client node, lookups show the permissions of the directory on which the volume is mounted sometimes as "555" and sometimes as "755".
* On the server node, the permissions on the 4th brick (the one that was killed and brought back up) remain "555" and are not updated to "755".

Expected results:
* On the client node, the permissions of the directory on which the volume is mounted (the root directory) should be shown as "755".
* On the server node, the permissions on the 4th brick that was killed and brought back up should be updated to "755".

Additional info:
Sosreports will be shared.

Comment 18 errata-xmlrpc 2021-04-29 07:20:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1462