Bug 1473967 - storage.owner-uid and owner-gid parameters not working
Summary: storage.owner-uid and owner-gid parameters not working
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: GlusterFS
Classification: Community
Component: fuse
Version: 3.8
Hardware: x86_64
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-07-22 20:19 UTC by hostingnuggets
Modified: 2018-03-22 00:13 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-10-02 14:15:28 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Description hostingnuggets 2017-07-22 20:19:13 UTC
Description of problem:
I want the root directory of my volume to be owned by UID/GID 1000 rather than root, so I set the storage.owner-uid and storage.owner-gid parameters on the volume. However, on a client where the volume is mounted via FUSE, the ownership reverts to root:root on its own.

Version-Release number of selected component (if applicable):
3.8.11

How reproducible:
always

Steps to Reproduce:
1. create a replica 3 volume with arbiter
2. set the following two parameters:
gluster volume set myvol storage.owner-uid 1000
gluster volume set myvol storage.owner-gid 1000
3. mount the volume using fuse on a client
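
For reference, one way to verify the ownership after step 3, assuming a server hostname of server1 and a mount point of /mnt/myvol (both are just examples):

# mount the volume via the FUSE client
mount -t glusterfs server1:/myvol /mnt/myvol
# check the owner of the volume root; expected output is 1000:1000
stat -c '%u:%g' /mnt/myvol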

Actual results:
The root directory of the mounted volume initially has its owner UID/GID set to 1000, but after a few minutes it changes back to root:root.

Expected results:
The root directory of the mounted volume should keep 1000:1000 as its UID/GID and not revert automatically to root:root.

Additional info:
See my post on the gluster-users mailing list, which unfortunately received no answers: http://lists.gluster.org/pipermail/gluster-users/2017-July/031838.html

Comment 1 Csaba Henk 2017-08-03 19:54:23 UTC
Hi,

can you please unmount & stop the volume, set the TRACE log level on both the bricks and the client as described in

https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/administration_guide/configuring_the_log_level

and then start up the volume and replay the reproduction steps?
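
For reference, setting those log levels would look roughly like this (using the volume name from the report):

gluster volume set myvol diagnostics.brick-log-level TRACE
gluster volume set myvol diagnostics.client-log-level TRACE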

Comment 2 Csaba Henk 2017-09-12 17:54:50 UTC
Putting priority to low until the requested info arrives.

Comment 3 hostingnuggets 2017-10-01 14:04:07 UTC
False alarm: I found out that it was actually Puppet resetting the owner on the root directory of my mount. I apologize for opening this bug in error; you can of course close it immediately, as it is not a bug in GlusterFS. Thank you.

Comment 4 Csaba Henk 2017-10-02 14:15:28 UTC
Thanks for the update!

Comment 5 Eric 2018-03-22 00:13:52 UTC
Stopping and starting the volume after setting `storage.owner-gid` and `storage.owner-uid` worked perfectly.
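
A sketch of that sequence (volume name taken from the report; server name and mount point are examples):

gluster volume set myvol storage.owner-uid 1000
gluster volume set myvol storage.owner-gid 1000
gluster volume stop myvol
gluster volume start myvol
# remount on the client and verify the owner is 1000:1000
mount -t glusterfs server1:/myvol /mnt/myvol
stat -c '%u:%g' /mnt/myvol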

