Bug 1508999
| Summary: | [Fuse Sub-dir] After performing add-brick on volume, doing rm -rf * on subdir mount point fails with "Transport endpoint is not connected" | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Manisha Saini <msaini> |
| Component: | fuse | Assignee: | Amar Tumballi <atumball> |
| Status: | CLOSED ERRATA | QA Contact: | Manisha Saini <msaini> |
| Severity: | medium | Docs Contact: | |
| Priority: | high | | |
| Version: | rhgs-3.3 | CC: | amukherj, atumball, mchangir, rcyriac, rhinduja, rhs-bugs, sheggodu, srmukher, storage-qa-internal |
| Target Milestone: | --- | | |
| Target Release: | RHGS 3.4.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.12.2-5 | Doc Type: | Bug Fix |
| Doc Text: | Clients that mount a 'subdir' of a volume cannot heal the directory structure when an 'add-brick' is performed, because the distribute layer does not know the parent directories of the mounted subdirectory while performing directory self-heal. The workaround is to mount the volume (without the subdirectory) on one of the servers after the add-brick and run self-heal operations on the 'subdir' directories. This is now performed automatically using 'hook' scripts, so no user intervention is required. | | |
| Story Points: | --- | | |
| Clone Of: | | | |
| | 1549915 (view as bug list) | Environment: | |
| Last Closed: | 2018-09-04 06:38:02 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1503134, 1549915 | | |
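
The doc text above says the heal is now triggered from 'hook' scripts after add-brick. As a rough illustration only (this is not the shipped script; the script name, argument handling, and mount point are all hypothetical), an add-brick post-hook could do something like:

```sh
#!/bin/bash
# Hypothetical sketch of an add-brick post-hook that heals subdir layouts.
# Real glusterd hooks are invoked with arguments such as --volname=<vol>;
# this sketch just takes the volume name as $1 for simplicity.
VOL=$1
MNT=$(mktemp -d)

# Mount the full volume (not a subdir) so the distribute layer
# can see the whole directory tree, including subdir parents.
mount -t glusterfs localhost:/"$VOL" "$MNT" || exit 1

# stat the top-level directories; the lookup triggers directory
# self-heal, creating their layout on the newly added bricks.
find "$MNT" -mindepth 1 -maxdepth 1 -type d -exec stat {} + > /dev/null

umount "$MNT"
rmdir "$MNT"
```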
Description
Manisha Saini
2017-11-02 16:17:47 UTC
I propose to make this a known issue, as the feature is in TP (Technology Preview). The steps to resolve this issue are either:

* After the 'add-brick' operation, do a 'stat ${all_subdirs_exported}' on the full volume mount, and then continue the operations in the subdir mount points; or
* After the 'add-brick' operation, run 'rebalance' (even 'rebalance fix-layout' alone is good enough), and then continue the rm -rf operations on the subdir mount points.

(Hedged sketches of both workarounds appear at the end of this report.)

https://review.gluster.org/18645 is a method to fix it, but the patch needs more review and more testing; it doesn't look like we can fix it by 3.3.1, and hence I still recommend this as a 'known issue'.

Marking it as POST, as the RCA is known and a patch to handle it automatically is posted upstream. (Note that we may need a similar hook script for replace-brick too.)

DocText looks fine.

Verified this BZ on glusterfs-3.12.2-6.el7rhgs.x86_64.

Steps:

1. Create a 4x3 distributed-replicate volume.
2. Mount the volume on a client via FUSE.
3. Create 4 dirs inside the mount point.
4. Set auth.allow permissions on the volume:
   # gluster v set Ganeshavol1 auth.allow "/dir1(10.70.46.125),/dir2(10.70.46.20),/dir3(10.70.47.33),/(*)"
5. Mount the subdirs on their respective clients (see the subdir mount sketch at the end of this report).
6. Perform some I/O (create directories).
7. Perform an add-brick operation on the volume:
   # gluster v add-brick Ganeshavol1 dhcp47-193.lab.eng.blr.redhat.com:/gluster/brick1/new1 dhcp46-116.lab.eng.blr.redhat.com:/gluster/brick1/new1 dhcp46-184.lab.eng.blr.redhat.com:/gluster/brick1/new1
8. Perform rm -rf * from all the mount points.

Moving this BZ to verified state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607

Made minor changes, and everything looks good now, IMO.
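
For reference, a minimal sketch of the first workaround above, reusing the volume and server names from the verification steps; the mount point /mnt/fullvol and the exact subdir names are assumptions:

```sh
# Mount the full volume (not a subdir) on one of the servers or clients.
mount -t glusterfs dhcp47-193.lab.eng.blr.redhat.com:/Ganeshavol1 /mnt/fullvol

# stat each exported subdirectory; the lookup makes the distribute
# layer heal the directory layout onto the newly added bricks.
stat /mnt/fullvol/dir1 /mnt/fullvol/dir2 /mnt/fullvol/dir3

umount /mnt/fullvol
```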
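
And a sketch of the second workaround, using rebalance fix-layout (volume name taken from the verification steps):

```sh
# Fix the layout on all directories without migrating any data.
gluster volume rebalance Ganeshavol1 fix-layout start

# Check progress; wait for completion before resuming rm -rf
# on the subdir mount points.
gluster volume rebalance Ganeshavol1 status
```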
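
Step 5 of the verification mounts each subdir on the client that auth.allow permits for it. A sketch for /dir1, to be run on client 10.70.46.125 (the mount point /mnt/subdir1 is an assumption):

```sh
# Mount only /dir1 of the volume, using the "volname/subdir" mount syntax.
mount -t glusterfs dhcp47-193.lab.eng.blr.redhat.com:/Ganeshavol1/dir1 /mnt/subdir1
```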