Description of problem:
While a sub-directory is mounted on a client and an add-brick is performed, running rm -rf * on the mount point fails to delete the directories present there.

Version-Release number of selected component (if applicable):
master

How reproducible:
2/2

Steps to Reproduce (a consolidated shell sketch of these steps follows this comment):
1. Create a 3 x (2 + 1) = 9 arbiter volume.
2. Mount the volume on a client via FUSE.
3. Create a directory, say "dir1", inside the mount point.
4. Restrict access to the directory on the volume:
   # gluster v set glustervol auth.allow "/dir1(10.70.37.192)"
   volume set: success
5. Mount the sub-directory "dir1" on the client:
   # mount -t glusterfs node1:glustervol/dir1 /mnt/posix_Parent/
6. Create 1000 directories on the mount point.
7. Perform add-brick:
   # gluster v add-brick glustervol node1:/gluster/brick3/3 node2:/gluster/brick3/3 node3:/gluster/brick3/3
   volume add-brick: success
8. After performing add-brick, run rm -rf * on the mount point.

Actual results:
rm -rf * on the mount point fails with "Transport endpoint is not connected", even though the sub-directory is still mounted on the client:

rm: cannot remove ‘sd979’: Transport endpoint is not connected
rm: cannot remove ‘sd98’: Transport endpoint is not connected
rm: cannot remove ‘sd980’: Transport endpoint is not connected
rm: cannot remove ‘sd981’: Transport endpoint is not connected
rm: cannot remove ‘sd982’: Transport endpoint is not connected
rm: cannot remove ‘sd983’: Transport endpoint is not connected
rm: cannot remove ‘sd984’: Transport endpoint is not connected
rm: cannot remove ‘sd985’: Transport endpoint is not connected
rm: cannot remove ‘sd986’: Transport endpoint is not connected
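The reproduction steps above can be condensed into one shell sketch. Hostnames, the client IP (10.70.37.192), and the add-brick paths come from this report; the brick paths used to create the initial volume and the mount points are placeholders and will differ in other environments.

#!/bin/bash
# Steps 1-2: create and start a 3 x (2 + 1) arbiter volume, then mount it.
# Brick paths below are placeholders, not taken from the report.
gluster volume create glustervol replica 3 arbiter 1 \
    node1:/gluster/brick1/1 node2:/gluster/brick1/1 node3:/gluster/brick1/1 \
    node1:/gluster/brick2/2 node2:/gluster/brick2/2 node3:/gluster/brick2/2 \
    node1:/gluster/brick4/4 node2:/gluster/brick4/4 node3:/gluster/brick4/4
gluster volume start glustervol
mount -t glusterfs node1:/glustervol /mnt/full_vol

# Steps 3-4: create the sub-directory and restrict access to it.
mkdir /mnt/full_vol/dir1
gluster volume set glustervol auth.allow "/dir1(10.70.37.192)"

# Step 5: mount only the sub-directory on the client.
mount -t glusterfs node1:/glustervol/dir1 /mnt/posix_Parent

# Step 6: create 1000 directories through the sub-dir mount.
for i in $(seq 1 1000); do mkdir "/mnt/posix_Parent/sd$i"; done

# Step 7: add a new replica set.
gluster volume add-brick glustervol \
    node1:/gluster/brick3/3 node2:/gluster/brick3/3 node3:/gluster/brick3/3

# Step 8: try to remove everything through the sub-dir mount; before the
# fix this fails with "Transport endpoint is not connected".
rm -rf /mnt/posix_Parent/*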
REVIEW: https://review.gluster.org/18645 (hooks: add a script to stat the subdirs in add-brick) posted (#4) for review on master by Amar Tumballi
COMMIT: https://review.gluster.org/18645 committed in master by "Atin Mukherjee" <amukherj> with a commit message:

hooks: add a script to stat the subdirs in add-brick

The subdirectories are expected to be present for a subdir mount to be successful. If not, the client_handshake() itself fails to succeed. When a volume is about to get mounted for the first time, this is easier to handle: if the directory is not present on one brick, then it is mostly not present on any other brick either.

In the case of add-brick, the directory is not present on the new brick, and there is no chance of healing it from the subdirectory mount, as on those clients the subdir itself is the 'root' ('/') of the filesystem. Hence we need a volume-level mount to heal the directory before connections can succeed.

This patch takes care of that by healing the directories which are expected to be mounted as subdirectories from the volume-level mount point.

Change-Id: I2c2ac7b7567fe209aaa720006d09b68584d0dd14
BUG: 1549915
Signed-off-by: Amar Tumballi <amarts>
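The committed hook is S13create-subdir-mounts.sh (see the follow-up patch below). As a rough illustration of the approach the commit message describes, the sketch below mounts the whole volume once on the server and stats each sub-directory listed in auth.allow, so that lookup self-heal creates the directory on the newly added bricks. The argument handling, the auth.allow parsing, and the paths are assumptions for illustration only; they are not the contents of the actual script.

#!/bin/bash
# Hedged sketch of the approach above: mount the full volume (not a sub-dir
# mount) and stat every sub-directory that clients are allowed to mount, so
# a lookup self-heal creates it on the newly added bricks.
# The --volname argument and the auth.allow parsing are assumptions; the
# real S13create-subdir-mounts.sh may differ.

VOL=""
for arg in "$@"; do
    case "$arg" in
        --volname=*) VOL="${arg#--volname=}" ;;
    esac
done
[ -n "$VOL" ] || exit 0

MNT=$(mktemp -d)
# Volume-level mount, so lookups can reach all bricks including new ones.
glusterfs --volfile-server=localhost --volfile-id="$VOL" "$MNT" || exit 1

# auth.allow entries look like "/dir1(10.70.37.192),/dir2(...)".  Strip the
# host part of each entry and stat the directory through the volume mount.
gluster volume get "$VOL" auth.allow | awk 'NR > 2 { print $2 }' | tr ',' '\n' |
while read -r entry; do
    dir="${entry%%(*}"
    [ -n "$dir" ] && [ "$dir" != "/" ] && [ "$dir" != "*" ] && \
        stat "$MNT$dir" > /dev/null 2>&1
done

umount "$MNT"
rmdir "$MNT"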
REVIEW: https://review.gluster.org/19682 (hooks: fix workdir in S13create-subdir-mounts.sh) posted (#1) for review on master by Atin Mukherjee
COMMIT: https://review.gluster.org/19682 committed in master by "Atin Mukherjee" <amukherj> with a commit message:

hooks: fix workdir in S13create-subdir-mounts.sh

Change-Id: Id3eff498091ad9fa4651e93b66903426e76776d6
BUG: 1549915
Signed-off-by: Atin Mukherjee <amukherj>
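For context, this follow-up concerns where the hook script finds glusterd's working directory. A minimal sketch of that idea, assuming the hook is passed a --gd-workdir=<path> argument (the flag name and the fallback path are assumptions here, not taken from the patch):

# Sketch only: derive glusterd's working directory from the hook arguments
# instead of hard-coding it.
GLUSTERD_WORKDIR="/var/lib/glusterd"   # assumed fallback
for arg in "$@"; do
    case "$arg" in
        --gd-workdir=*) GLUSTERD_WORKDIR="${arg#--gd-workdir=}" ;;
    esac
done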
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report. glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html [2] https://www.gluster.org/pipermail/gluster-users/