Bug 1181367 - rmdir changes permission of directory when rmdir fails with ENOTEMPTY
Summary: rmdir changes permission of directory when rmdir fails with ENOTEMPTY
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-01-13 02:12 UTC by Pranith Kumar K
Modified: 2015-05-14 17:45 UTC
CC List: 3 users

Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-05-14 17:28:58 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Pranith Kumar K 2015-01-13 02:12:18 UTC
Description of problem:
When a distribute volume has only one subvolume, a failed rmdir changes the uid/gid of the directory to root:root. The bug also exists with multiple dht subvolumes.

Version-Release number of selected component (if applicable):
These are the steps that reproduced the issue for me (a concrete command sketch follows the list below). I added logs in afr to confirm that setattr calls arrive with uid and gid of 0 and the valid flags set for every field in the stat, which should never happen.

1. Create a plain replicate volume 'r2'.
2. Disable stat-prefetch.
3. Create two fuse mounts with --attribute-timeout=0 and --entry-timeout=0.
4. chown <normal-user>:<normal-user> <mnt>  # on my laptop the command was 'chown pk1:pk1 /mnt/r2'
5. On both mounts, execute the following command:
   while true; do mkdir d1; touch d1/a; rm d1/a; rmdir d1; done
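
A concrete sketch of the setup above, assuming a single host named 'host1' and placeholder brick paths (the stat-prefetch option key is assumed to be performance.stat-prefetch):

    # Placeholder host and brick paths; run as root
    gluster volume create r2 replica 2 host1:/bricks/r2_0 host1:/bricks/r2_1 force
    gluster volume start r2
    gluster volume set r2 performance.stat-prefetch off

    # Two FUSE mounts with attribute/entry caching disabled
    glusterfs --volfile-server=host1 --volfile-id=r2 \
        --attribute-timeout=0 --entry-timeout=0 /mnt/r2
    glusterfs --volfile-server=host1 --volfile-id=r2 \
        --attribute-timeout=0 --entry-timeout=0 /mnt/r2-b

    chown pk1:pk1 /mnt/r2   # normal user from the example above

    # As pk1, run concurrently from inside /mnt/r2 and /mnt/r2-b
    while true; do mkdir d1; touch d1/a; rm d1/a; rmdir d1; done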

How reproducible:
Always

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Anand Avati 2015-01-13 02:12:39 UTC
REVIEW: http://review.gluster.org/9435 (cluster/dht: Don't restore entry when only one subvolume is present) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 2 Anand Avati 2015-01-13 02:19:32 UTC
REVIEW: http://review.gluster.org/9435 (cluster/dht: Don't restore entry when only one subvolume is present) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 3 Anand Avati 2015-01-19 05:11:52 UTC
COMMIT: http://review.gluster.org/9435 committed in master by Raghavendra G (rgowdapp) 
------
commit 7b58df7965ad557e23681d61164bfc7d609ed2cd
Author: Pranith Kumar K <pkarampu>
Date:   Mon Jan 12 17:05:32 2015 +0530

    cluster/dht: Don't restore entry when only one subvolume is present
    
    Problem:
    When rmdir fails with an op_errno other than ENOENT/EACCES, self-heal
    is attempted with a zeroed-out stbuf; only ia_type is filled in from
    the inode. When the self-heal progresses, it sees that the directory
    is still present and performs a setattr with all valid flags set to
    '1', so the directory ends up owned by root:root and its times go to
    the epoch.
    
    Fix:
    This fixes the problem only for dht with a single subvolume: simply
    do not perform self-heal when there is only one subvolume.
    
    Change-Id: I6c85b845105bc6bbe7805a14a48a2c5d7bc0c5b6
    BUG: 1181367
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/9435
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Shyamsundar Ranganathan <srangana>
    Reviewed-by: Raghavendra G <rgowdapp>
    Tested-by: Raghavendra G <rgowdapp>
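
With the patch above in place, the ownership of a directory should survive a failed rmdir. A minimal check along those lines (an illustrative sketch, not the regression test from the change; the mount point /mnt/r2 and user pk1 come from the reporter's setup):

    # Run as the normal user pk1 on one of the mounts
    mkdir /mnt/r2/d1
    touch /mnt/r2/d1/a                 # keep the directory non-empty
    rmdir /mnt/r2/d1 2>/dev/null       # fails with ENOTEMPTY
    stat -c '%U:%G %y' /mnt/r2/d1      # expected: pk1:pk1 with sane timestamps,
                                       # not root:root with times at the epoch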

Comment 4 Niels de Vos 2015-05-14 17:28:58 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

