Bug 1453977 - Brick Multiplexing: Deleting brick directories of the base volume must gracefully detach from glusterfsd without impacting other volumes' I/O (currently seeing transport endpoint error)
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: Mohit Agrawal
QA Contact:
URL:
Whiteboard: brick-multiplexing
Depends On:
Blocks: 1451598 1458113
 
Reported: 2017-05-22 11:39 UTC by Mohit Agrawal
Modified: 2018-03-24 07:20 UTC (History)
6 users

Fixed In Version: glusterfs-3.12.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1451598
Environment:
Last Closed: 2017-09-05 17:31:32 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1447390 0 unspecified CLOSED Brick Multiplexing :- .trashcan not able to heal after replace brick 2021-02-22 00:41:40 UTC

Internal Links: 1447390

Comment 1 Worker Ant 2017-05-22 11:59:31 UTC
REVIEW: https://review.gluster.org/17356 (glusterfs: Not able to mount running volume after enable brick mux and stopped any volume) posted (#1) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 2 Worker Ant 2017-05-22 12:17:29 UTC
REVIEW: https://review.gluster.org/17356 (glusterfs: Not able to mount running volume after enable brick mux and stopped any volume) posted (#2) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 3 Worker Ant 2017-05-23 08:54:28 UTC
REVIEW: https://review.gluster.org/17356 (glusterfs: Not able to mount running volume after enable brick mux and stopped any volume) posted (#3) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 4 Worker Ant 2017-05-24 05:58:39 UTC
REVIEW: https://review.gluster.org/17356 (glusterfs: Not able to mount running volume after enable brick mux and stopped any volume) posted (#4) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 5 Worker Ant 2017-05-25 16:20:24 UTC
REVIEW: https://review.gluster.org/17356 (glusterfs: Not able to mount running volume after enable brick mux and stopped any volume) posted (#5) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 6 Worker Ant 2017-05-26 01:48:17 UTC
REVIEW: https://review.gluster.org/17356 (glusterfs: Not able to mount running volume after enable brick mux and stopped any volume) posted (#6) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 7 Worker Ant 2017-05-26 02:10:27 UTC
REVIEW: https://review.gluster.org/17356 (glusterfs: Not able to mount running volume after enable brick mux and stopped any volume) posted (#7) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 8 Worker Ant 2017-05-26 10:48:08 UTC
REVIEW: https://review.gluster.org/17356 (glusterfs: Not able to mount running volume after enable brick mux and stopped any volume) posted (#8) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 9 Worker Ant 2017-05-26 10:50:56 UTC
REVIEW: https://review.gluster.org/17356 (glusterfs: Not able to mount running volume after enable brick mux and stopped any volume) posted (#9) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 10 Worker Ant 2017-05-26 11:34:08 UTC
REVIEW: https://review.gluster.org/17356 (glusterfs: Not able to mount running volume after enable brick mux and stopped any volume) posted (#10) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 11 Worker Ant 2017-05-28 08:22:30 UTC
REVIEW: https://review.gluster.org/17356 (glusterfs: Not able to mount running volume after enable brick mux and stopped any volume) posted (#11) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 12 Worker Ant 2017-05-29 15:47:18 UTC
REVIEW: https://review.gluster.org/17356 (glusterfs: Not able to mount running volume after enable brick mux and stopped any volume) posted (#12) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 13 Worker Ant 2017-05-30 02:01:52 UTC
REVIEW: https://review.gluster.org/17356 (glusterfs: Not able to mount running volume after enable brick mux and stopped any volume) posted (#13) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 14 Worker Ant 2017-05-30 02:15:46 UTC
REVIEW: https://review.gluster.org/17356 (glusterfs: Not able to mount running volume after enable brick mux and stopped any volume) posted (#14) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 15 Worker Ant 2017-05-30 10:00:47 UTC
REVIEW: https://review.gluster.org/17356 (glusterfs: Not able to mount running volume after enable brick mux and stopped any volume) posted (#15) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 16 Shyamsundar 2017-05-30 13:16:33 UTC
Removed this as blocking 3.11:
  - This is not a regression, but an existing bug in the code
  - As a result, it need not block the 3.11 release, which is set to be tagged today (30th May), and the patch is not ready
  - When the 3.11.1 tracker is opened, this would be a good backport to include there, tracked against that bug instead

Comment 17 Worker Ant 2017-05-31 11:41:12 UTC
REVIEW: https://review.gluster.org/17356 (glusterfs: Not able to mount running volume after enable brick mux and stopped any volume) posted (#16) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 18 Worker Ant 2017-05-31 13:56:50 UTC
REVIEW: https://review.gluster.org/17356 (glusterfs: Not able to mount running volume after enable brick mux and stopped any volume) posted (#17) for review on master by MOHIT AGRAWAL (moagrawa)

Comment 19 Worker Ant 2017-05-31 20:43:57 UTC
COMMIT: https://review.gluster.org/17356 committed in master by Jeff Darcy (jeff.us) 
------
commit dba55ae364a2772904bb68a6bd0ea87289ee1470
Author: Mohit Agrawal <moagrawa>
Date:   Thu May 25 21:43:42 2017 +0530

    glusterfs: Not able to mount running volume after enable brick mux and stopped any volume
    
    Problem: With brick multiplexing enabled, if any volume goes down and a mount of a
             running volume is then attempted, the mount command hangs.
    
    Solution: With brick multiplexing enabled, the server shares a single server_conf
              data structure across all associated subvolumes. When any subvolume goes
              down ungracefully (e.g. its brick directory is removed), the posix xlator
              sends a GF_EVENT_CHILD_DOWN event to its parent xlators, and the server
              notify handler sets child_up to false in server_conf. When a client then
              tries to communicate with the server through a mount, it checks
              conf->child_up, finds it FALSE, and fails with
              "translator are not yet ready". This patch changes server_conf to store
              the child_up status per xlator. Another important correction in this
              patch is that threads in the server-side xlators are cleaned up after
              the volume is stopped.
    
    BUG: 1453977
    Change-Id: Ic54da3f01881b7c9429ce92cc569236eb1d43e0d
    Signed-off-by: Mohit Agrawal <moagrawa>
    Reviewed-on: https://review.gluster.org/17356
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Raghavendra Talur <rtalur>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Jeff Darcy <jeff.us>
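
The fix described in the commit message replaces the single shared child_up flag in server_conf with per-child state, so one brick going down no longer marks every multiplexed volume as down. Below is a minimal sketch of that idea only; the type and function names (server_conf_t, child_status_t, notify_child, can_serve) are illustrative and are not the actual GlusterFS structures or APIs.

/*
 * Sketch only (not GlusterFS code): track child_up per brick xlator instead of
 * keeping one shared flag for the whole multiplexed brick process.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX_CHILDREN 8

typedef struct {
    char name[64];   /* brick/xlator name */
    bool child_up;   /* per-child status instead of one global flag */
} child_status_t;

typedef struct {
    child_status_t children[MAX_CHILDREN];
    int            nchildren;
} server_conf_t;

/* Record a CHILD_UP/CHILD_DOWN event for one child only. */
static void notify_child(server_conf_t *conf, const char *name, bool up) {
    for (int i = 0; i < conf->nchildren; i++) {
        if (strcmp(conf->children[i].name, name) == 0) {
            conf->children[i].child_up = up;
            return;
        }
    }
}

/* A mount request only needs its own child xlator to be up. */
static bool can_serve(const server_conf_t *conf, const char *name) {
    for (int i = 0; i < conf->nchildren; i++)
        if (strcmp(conf->children[i].name, name) == 0)
            return conf->children[i].child_up;
    return false;
}

int main(void) {
    server_conf_t conf = { .nchildren = 2 };
    snprintf(conf.children[0].name, sizeof(conf.children[0].name), "vol1-brick");
    snprintf(conf.children[1].name, sizeof(conf.children[1].name), "vol2-brick");
    conf.children[0].child_up = true;
    conf.children[1].child_up = true;

    /* Simulate vol1's brick directory being removed (CHILD_DOWN). */
    notify_child(&conf, "vol1-brick", false);

    printf("vol1 mountable: %d\n", can_serve(&conf, "vol1-brick")); /* 0 */
    printf("vol2 mountable: %d\n", can_serve(&conf, "vol2-brick")); /* 1 */
    return 0;
}

With per-child status, a CHILD_DOWN caused by one volume's brick (for example after its brick directory is deleted) only affects mounts of that volume; clients of the other multiplexed volumes still see their own child as up instead of hitting a transport endpoint error.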

Comment 20 Worker Ant 2017-06-04 13:27:54 UTC
REVIEW: https://review.gluster.org/17458 (glusterfs: Not able to mount running volume after enable brick mux and stopped any volume) posted (#1) for review on release-3.11 by Atin Mukherjee (amukherj)

Comment 21 Shyamsundar 2017-09-05 17:31:32 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.12.0, please open a new bug report.

glusterfs-3.12.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-September/000082.html
[2] https://www.gluster.org/pipermail/gluster-users/

