Bug 1549915 - [Fuse Sub-dir] After performing add-brick on volume, doing rm -rf * on subdir mount point fails with "Transport endpoint is not connected"
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: fuse
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1508999
Blocks: 1475693
 
Reported: 2018-02-28 03:39 UTC by Amar Tumballi
Modified: 2018-06-20 18:01 UTC
CC: 9 users

Fixed In Version: glusterfs-v4.1.0
Clone Of: 1508999
Environment:
Last Closed: 2018-06-20 18:01:20 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Amar Tumballi 2018-02-28 03:41:06 UTC
Description of problem:

While a sub-directory is mounted on the client and add-brick is performed, doing rm -rf * on the mount point fails to delete the directories present there.



Version-Release number of selected component (if applicable):
master

How reproducible:
2/2

Steps to Reproduce:
1. Create a 3 x (2 + 1) = 9 brick arbiter volume.
2. Mount the volume on a client via FUSE.
3. Create a directory, say "dir1", inside the mount point.
4. Restrict access to the directory on the volume:
# gluster v set glustervol auth.allow "/dir1(10.70.37.192)"
volume set: success

5. Mount the sub-dir "dir1" on the client:
# mount -t glusterfs node1:glustervol/dir1 /mnt/posix_Parent/

6. Create 1000 directories on the mount point.

7. Perform add-brick:
# gluster v add-brick glustervol node1:/gluster/brick3/3 node2:/gluster/brick3/3 node3:/gluster/brick3/3
volume add-brick: success

8. After performing add-brick, do rm -rf * on the mount point.
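The steps above can be sketched as a single script. This is an illustrative sketch only, reusing the names from the report (glustervol, node1..node3, dir1, client IP 10.70.37.192); it assumes the arbiter volume already exists and the script runs on a node with the gluster CLI:

```shell
#!/bin/sh
# Sketch of the reproduction steps above. Volume, node, and client
# names are taken from the report; an existing arbiter volume is assumed.

# Build an auth.allow value such as "/dir1(10.70.37.192)".
auth_allow_value() {
    printf '/%s(%s)' "$1" "$2"
}

reproduce() {
    # Restrict the subdir to one client, then mount just that subdir.
    gluster volume set glustervol auth.allow "$(auth_allow_value dir1 10.70.37.192)"
    mount -t glusterfs node1:glustervol/dir1 /mnt/posix_Parent/

    # Create 1000 directories on the subdir mount point.
    for i in $(seq 1 1000); do mkdir "/mnt/posix_Parent/sd$i"; done

    # Expand the volume, then try to remove everything.
    gluster volume add-brick glustervol \
        node1:/gluster/brick3/3 node2:/gluster/brick3/3 node3:/gluster/brick3/3
    rm -rf /mnt/posix_Parent/*   # fails with "Transport endpoint is not connected"
}

# Invoke explicitly on a test cluster, e.g.: reproduce
```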




Actual results:

rm -rf * on the mount point fails with "Transport endpoint is not connected", even though the subdir is still mounted on the client.

rm: cannot remove ‘sd979’: Transport endpoint is not connected
rm: cannot remove ‘sd98’: Transport endpoint is not connected
rm: cannot remove ‘sd980’: Transport endpoint is not connected
rm: cannot remove ‘sd981’: Transport endpoint is not connected
rm: cannot remove ‘sd982’: Transport endpoint is not connected
rm: cannot remove ‘sd983’: Transport endpoint is not connected
rm: cannot remove ‘sd984’: Transport endpoint is not connected
rm: cannot remove ‘sd985’: Transport endpoint is not connected
rm: cannot remove ‘sd986’: Transport endpoint is not connected

Comment 2 Worker Ant 2018-02-28 03:42:56 UTC
REVIEW: https://review.gluster.org/18645 (hooks: add a script to stat the subdirs in add-brick) posted (#4) for review on master by Amar Tumballi

Comment 3 Worker Ant 2018-03-06 14:42:11 UTC
COMMIT: https://review.gluster.org/18645 committed in master by "Atin Mukherjee" <amukherj> with a commit message- hooks: add a script to stat the subdirs in add-brick

The subdirectories are expected to be present for a subdir
mount to succeed; if not, client_handshake() itself fails.
When a volume is about to be mounted for the first time, this
is easier to handle: if the directory is not present on one
brick, it is most likely not present on any other brick. In
the add-brick case, the directory is not present on the new
brick, and there is no way to heal it from the subdirectory
mount, because on those clients the subdir itself is the
root ('/') of the filesystem. Hence we need a volume-level
mount to heal the directory before connections can succeed.

This patch takes care of that by healing the directories
that are expected to be mounted as subdirectories, using a
volume-level mount point.

Change-Id: I2c2ac7b7567fe209aaa720006d09b68584d0dd14
BUG: 1549915
Signed-off-by: Amar Tumballi <amarts>
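The approach the commit message describes can be sketched as below. This is NOT the actual S13create-subdir-mounts.sh hook; the mount point handling, the parsing of the auth.allow value, and the `gluster volume get` output format are all assumptions made for illustration:

```shell
#!/bin/sh
# Illustrative sketch of the hook's approach: mount the whole volume
# so lookups reach the newly added bricks, then stat each subdirectory
# named in auth.allow so self-heal creates it on the new bricks.

# Turn an auth.allow value like "/dir1(10.70.37.192),/dir2(10.70.37.193)"
# into one directory path per line.
parse_subdirs() {
    printf '%s\n' "$1" | tr ',' '\n' | sed 's/(.*)//'
}

heal_subdirs() {
    vol=$1
    mnt=$(mktemp -d)

    # Temporary volume-level (root) mount; subdir clients cannot heal
    # "/dir1" themselves because for them "dir1" *is* "/".
    mount -t glusterfs "localhost:/$vol" "$mnt" || return 1

    # The exact output format of `gluster volume get` is assumed here.
    allow=$(gluster volume get "$vol" auth.allow | awk 'NR > 2 {print $2}')
    for d in $(parse_subdirs "$allow"); do
        stat "$mnt$d" > /dev/null 2>&1   # lookup triggers directory heal
    done

    umount "$mnt"
    rmdir "$mnt"
}

# Run only when a volume name is given, e.g.: sh heal-subdirs.sh glustervol
if [ $# -ge 1 ]; then
    heal_subdirs "$1"
fi
```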

Comment 4 Worker Ant 2018-03-06 16:31:18 UTC
REVIEW: https://review.gluster.org/19682 (hooks: fix workdir in S13create-subdir-mounts.sh) posted (#1) for review on master by Atin Mukherjee

Comment 5 Worker Ant 2018-03-07 07:39:59 UTC
COMMIT: https://review.gluster.org/19682 committed in master by "Atin Mukherjee" <amukherj> with a commit message- hooks: fix workdir in S13create-subdir-mounts.sh

Change-Id: Id3eff498091ad9fa4651e93b66903426e76776d6
BUG: 1549915
Signed-off-by: Atin Mukherjee <amukherj>

Comment 6 Shyamsundar 2018-06-20 18:01:20 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/

