Description of problem:
When only one distribute subvolume is present, an rmdir failure results in the uid/gid of the directory changing to root:root. The bug also exists with multiple dht subvolumes.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Create a plain replicate volume 'r2' and disable stat-prefetch.
2. Create two fuse mounts with --attribute-timeout=0 and --entry-timeout=0.
3. chown <normal-user>:<normal-user> <mnt>  # on my laptop the command was 'chown pk1:pk1 /mnt/r2'
4. On both mounts execute the following command:
   while true; do mkdir d1; touch d1/a; rm d1/a; rmdir d1; done

Actual results:
The directory ends up owned by root:root.

Expected results:
The ownership set by the user is preserved.

Additional info:
These were the steps which created the issue for me. I added logs in afr to confirm that setattrs with uid/gid 0 arrive with every valid flag in the stat set, which should never happen.
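For reference, a minimal shell sketch of the reproduction above, assuming a single test node 'server1' with local bricks under /bricks and mount points /mnt/r2-a and /mnt/r2-b (all placeholders; the user pk1 and the fuse timeouts come from the report):

    #!/bin/sh
    # 1. Plain replicate volume 'r2' with stat-prefetch disabled
    #    (both bricks on one node for a local test, hence 'force')
    gluster volume create r2 replica 2 \
            server1:/bricks/r2-0 server1:/bricks/r2-1 force
    gluster volume set r2 performance.stat-prefetch off
    gluster volume start r2

    # 2. Two fuse mounts with attribute/entry caching disabled so
    #    every lookup/stat goes to the bricks
    mkdir -p /mnt/r2-a /mnt/r2-b
    glusterfs --volfile-server=server1 --volfile-id=r2 \
              --attribute-timeout=0 --entry-timeout=0 /mnt/r2-a
    glusterfs --volfile-server=server1 --volfile-id=r2 \
              --attribute-timeout=0 --entry-timeout=0 /mnt/r2-b

    # 3. Hand the tree to a normal user
    chown pk1:pk1 /mnt/r2-a

    # 4. Race mkdir/rm/rmdir from both mounts; a concurrently failing
    #    rmdir (op_errno other than ENOENT/EACCES) triggers the bogus
    #    self-heal that resets ownership to root:root
    for m in /mnt/r2-a /mnt/r2-b; do
        sudo -u pk1 sh -c "cd $m && while true; do
            mkdir d1; touch d1/a; rm -f d1/a; rmdir d1
        done" &
    done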
REVIEW: http://review.gluster.org/9435 (cluster/dht: Don't restore entry when only one subvolume is present) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)
REVIEW: http://review.gluster.org/9435 (cluster/dht: Don't restore entry when only one subvolume is present) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu)
COMMIT: http://review.gluster.org/9435 committed in master by Raghavendra G (rgowdapp)
------
commit 7b58df7965ad557e23681d61164bfc7d609ed2cd
Author: Pranith Kumar K <pkarampu>
Date:   Mon Jan 12 17:05:32 2015 +0530

    cluster/dht: Don't restore entry when only one subvolume is present

    Problem:
    When rmdir fails with an op_errno other than ENOENT/EACCES, self-heal
    is attempted with a zeroed-out stbuf; only ia_type is filled from the
    inode. When the self-heal progresses, it sees that the directory is
    still present and performs setattr with all valid flags set to '1',
    so the file will be owned by root:root and the time goes to epoch.

    Fix:
    This fixes the problem only in dht with a single subvolume. Just
    don't perform self-heal when there is a single subvolume.

    Change-Id: I6c85b845105bc6bbe7805a14a48a2c5d7bc0c5b6
    BUG: 1181367
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/9435
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Shyamsundar Ranganathan <srangana>
    Reviewed-by: Raghavendra G <rgowdapp>
    Tested-by: Raghavendra G <rgowdapp>
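With the patch applied, one way to check the fix is to rerun the loop briefly and confirm that neither the mount root nor a leftover d1 has fallen back to root:root or an epoch timestamp. A sketch using the same placeholder mount and user as above (exactly which directory shows the symptom depends on where the failed rmdir left the entry, so both are inspected):

    # Run the reproducer loop for a minute, then inspect ownership/times.
    timeout 60 sudo -u pk1 sh -c 'cd /mnt/r2-a && while true; do
        mkdir d1; touch d1/a; rm -f d1/a; rmdir d1
    done' 2>/dev/null

    stat -c '%n owner=%U:%G mtime=%y' /mnt/r2-a /mnt/r2-a/d1 2>/dev/null
    # before the fix: owner=root:root with mtime at the epoch (1970-01-01)
    # after the fix : owner=pk1:pk1 with a current mtime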
This bug is getting closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user