Steps to Reproduce:
1. Create an nfs-ganesha cluster, create a 2*(4+2) EC volume, and enable ganesha on it.
2. Set the mdcache options.
3. Mount the volume on a client.
4. Run the posix_compliance test suite.

Actual results:
The POSIX compliance rename test fails. prove -vf /opt/qa/tools/posix-testsuite/tests/rename/00.t is the test that keeps failing.

Expected results:
All tests in the POSIX compliance suite should pass.
REVIEW: http://review.gluster.org/16390 (cluster/dht: Do rename cleanup as root) posted (#1) for review on release-3.8 by Pranith Kumar Karampuri (pkarampu)
COMMIT: http://review.gluster.org/16390 committed in release-3.8 by Pranith Kumar Karampuri (pkarampu)
------
commit 5ff6af5c5955bc885925c15e34c9bc0862168c02
Author: Pranith Kumar K <pkarampu>
Date:   Tue Jan 3 12:50:54 2017 +0530

cluster/dht: Do rename cleanup as root

Problem:
Rename linkfile cleanup is done as non-root, which may not have
privileges to do the rename, so it fails with EACCES; a future MKDIR
on that name will then leave a hole on this subvolume. This is hard
to hit on FUSE mounts because the VFS performs the permission checks
before the rename fop is even wound, but with nfs-ganesha mounts it
happens.

Fix:
Do rename cleanup as root.

>BUG: 1409727
>Change-Id: I414c1eb6dce76b4516a6c940557b249e6c3f22f4
>Signed-off-by: Pranith Kumar K <pkarampu>
>Reviewed-on: http://review.gluster.org/16317
>Smoke: Gluster Build System <jenkins.org>
>CentOS-regression: Gluster Build System <jenkins.org>
>Reviewed-by: Raghavendra G <rgowdapp>
>Reviewed-by: N Balachandran <nbalacha>
>NetBSD-regression: NetBSD Build System <jenkins.org>

BUG: 1412913
Change-Id: I7f891034150d7a0e3210202fb0788040c91e1c09
Signed-off-by: Pranith Kumar K <pkarampu>
Reviewed-on: http://review.gluster.org/16390
Smoke: Gluster Build System <jenkins.org>
Reviewed-by: N Balachandran <nbalacha>
CentOS-regression: Gluster Build System <jenkins.org>
NetBSD-regression: NetBSD Build System <jenkins.org>
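For context on what the fix changes: in GlusterFS, every fop frame carries the caller's credentials (frame->root->uid/gid), and server-side permission checks run against those. The patch escalates the frame to root before winding the internal linkfile cleanup, so the cleanup cannot be rejected with EACCES for an unprivileged caller. The following is a minimal standalone C mock of that pattern; the struct layout, function names, and the linkfile path are simplified stand-ins for illustration, not the actual dht-rename.c code.

#include <stdio.h>

typedef struct { unsigned int uid; unsigned int gid; } call_stack_t; /* mock of caller creds */
typedef struct { call_stack_t *root; } call_frame_t;                 /* mock fop frame */

/* Mock server-side cleanup: rejects non-root callers, as the bricks
   would when the caller lacks permission on the linkfile. */
static int
unlink_linkfile (call_frame_t *frame, const char *path)
{
        if (frame->root->uid != 0) {
                fprintf (stderr, "unlink (%s): EACCES for uid %u\n",
                         path, frame->root->uid);
                return -1;
        }
        printf ("unlink (%s): ok, performed as root\n", path);
        return 0;
}

static int
dht_rename_cleanup_mock (call_frame_t *frame, const char *linkfile)
{
        /* The gist of the fix: set the fop frame's credentials to root
           before winding the internal cleanup ("Do rename cleanup as
           root"), so a stale linkfile cannot be left behind on EACCES. */
        frame->root->uid = 0;
        frame->root->gid = 0;
        return unlink_linkfile (frame, linkfile);
}

int
main (void)
{
        call_stack_t creds = { .uid = 1000, .gid = 1000 }; /* unprivileged caller */
        call_frame_t frame = { .root = &creds };

        /* Without the fix: cleanup wound with the caller's uid fails. */
        unlink_linkfile (&frame, "/brick1/dir/file.linkto");
        /* With the fix: the same cleanup wound as root succeeds. */
        dht_rename_cleanup_mock (&frame, "/brick1/dir/file.linkto");
        return 0;
}

Compiled and run as-is, the first call fails with the simulated EACCES and the second succeeds, mirroring the before/after behavior the commit message describes.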
This bug is getting closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.9, please open a new bug report.

glusterfs-3.8.9 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2017-February/000066.html
[2] https://www.gluster.org/pipermail/gluster-users/