Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1473967

Summary: storage.owner-uid and owner-gid parameters not working
Product: [Community] GlusterFS
Component: fuse
Version: 3.8
Hardware: x86_64
OS: Linux
Status: CLOSED NOTABUG
Severity: medium
Priority: low
Reporter: hostingnuggets
Assignee: bugs <bugs>
CC: bugs, csaba, hostingnuggets, naisanza
Type: Bug
Last Closed: 2017-10-02 14:15:28 UTC

Description hostingnuggets 2017-07-22 20:19:13 UTC
Description of problem:
I want the root directory of my volume to be owned by UID/GID 1000 rather than root, so I set the storage.owner-uid/storage.owner-gid parameters on the volume. However, on a client where the volume is mounted with FUSE, the ownership simply reverts back to root:root on its own.

Version-Release number of selected component (if applicable):
3.8.11

How reproducible:
always

Steps to Reproduce:
1. create a replica 3 volume with an arbiter brick
2. set the following two parameters:
gluster volume set myvol storage.owner-uid 1000
gluster volume set myvol storage.owner-gid 1000
3. mount the volume using fuse on a client (see the full command sequence sketched below)
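
For reference, the full sequence might look like the following sketch; the host names (server1-3), brick paths, and mount point are placeholders, not taken from the report:

# create the replica 3 volume with one arbiter brick (hosts and paths are placeholders)
gluster volume create myvol replica 3 arbiter 1 \
  server1:/bricks/myvol server2:/bricks/myvol server3:/bricks/myvol
gluster volume start myvol

# set the desired ownership of the volume root
gluster volume set myvol storage.owner-uid 1000
gluster volume set myvol storage.owner-gid 1000

# mount the volume via fuse on a client and check the ownership
mkdir -p /mnt/myvol
mount -t glusterfs server1:/myvol /mnt/myvol
stat -c '%u:%g' /mnt/myvol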

Actual results:
The root directory of the mounted volume initially has its owner UID/GID set to 1000, but after a few minutes it changes back to root:root.

Expected results:
The root directory of the mounted volume should keep 1000:1000 as its UID/GID and not revert to root:root on its own.

Additional info:
See my post on the gluster-users mailing list, which unfortunately received no answers: http://lists.gluster.org/pipermail/gluster-users/2017-July/031838.html

Comment 1 Csaba Henk 2017-08-03 19:54:23 UTC
Hi,

can you please unmount and stop the volume, set the log level to TRACE on both the bricks and the client as described in

https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/administration_guide/configuring_the_log_level

and then start the volume again and replay the reproduction steps?
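
For a volume named myvol (taking the name from the reproduction steps above), the procedure described in that guide amounts to the following commands:

# raise brick-side and client-side log levels to TRACE
gluster volume set myvol diagnostics.brick-log-level TRACE
gluster volume set myvol diagnostics.client-log-level TRACE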

Comment 2 Csaba Henk 2017-09-12 17:54:50 UTC
Setting priority to low until the requested info arrives.

Comment 3 hostingnuggets 2017-10-01 14:04:07 UTC
False alarm: I found out that it was actually Puppet resetting the owner on the root directory of my mount. I apologize for opening this bug in error; please feel free to close it immediately, as it is not a bug in GlusterFS. Thank you.
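
For anyone hitting the same symptom, one quick way to confirm that configuration management is resetting the ownership is to watch the mount's root directory and search the Puppet code base for the mount point; the mount point and Puppet paths below are illustrative:

# watch the ownership of the mounted volume's root directory
watch -n 10 stat -c '%u:%g' /mnt/myvol

# search the Puppet code base for resources that manage the mount point
grep -r '/mnt/myvol' /etc/puppetlabs/code/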

Comment 4 Csaba Henk 2017-10-02 14:15:28 UTC
Thanks for the update!

Comment 5 Eric 2018-03-22 00:13:52 UTC
Stopping and starting the volume after setting `storage.owner-gid` and `storage.owner-uid` worked perfectly.
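
Assuming the volume and mount point names used in the sketches above, that workaround amounts to:

gluster volume stop myvol
gluster volume start myvol

# on the client, the volume may need to be remounted for the change to take effect
umount /mnt/myvol
mount -t glusterfs server1:/myvol /mnt/myvol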