Bug 1179180 - When the volume is in stopped state/all the bricks are down mount of the volume hangs
Summary: When the volume is in stopped state/all the bricks are down mount of the volume hangs
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1188471
 
Reported: 2015-01-06 11:44 UTC by Pranith Kumar K
Modified: 2015-05-14 17:45 UTC
CC List: 3 users

Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1188471 (view as bug list)
Environment:
Last Closed: 2015-05-14 17:28:51 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Pranith Kumar K 2015-01-06 11:44:40 UTC
Description of problem:
When all the bricks are down at the time of mounting the volume, the mount
command hangs. If only the fragment count of bricks is up (the minimum needed
to serve data; here, 2 of the 3 bricks for disperse 3 redundancy 1), the mount
takes 5 seconds to succeed.

root@pranithk-laptop - ~ 
17:12:37 :( ⚡ glusterd && gluster volume create ec2 disperse 3 redundancy 1 pranithk-laptop:/home/gfs/ec_{2,3,4} force
volume create: ec2: success: please start the volume to access data

root@pranithk-laptop - ~ 
17:12:42 :) ⚡ mount -t glusterfs pranithk-laptop:/ec2 /mnt/fuse1
^C

root@pranithk-laptop - ~ 
17:12:55 :( ⚡ ls /mnt/fuse1



^C^C^C

The command above hung; I had to kill the mount process to get the prompt back.

Version-Release number of selected component (if applicable):


How reproducible:
always

Steps to Reproduce:
1. Create a disperse volume (e.g. disperse 3 redundancy 1) but do not start it, so all bricks are down.
2. Mount the volume with the FUSE client.
3. The mount command hangs; any subsequent access to the mount point hangs as well.

Actual results:
The mount command hangs indefinitely and has to be killed; accessing the mount point also hangs.


Expected results:
The mount should fail (or return) promptly when not enough bricks are available, instead of hanging.

Additional info:

Comment 1 Anand Avati 2015-01-06 11:49:19 UTC
REVIEW: http://review.gluster.org/9396 (cluster/ec: Handle CHILD UP/DOWN in all cases) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 2 Anand Avati 2015-01-08 17:20:38 UTC
REVIEW: http://review.gluster.org/9396 (cluster/ec: Handle CHILD UP/DOWN in all cases) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 3 Anand Avati 2015-01-28 09:05:14 UTC
REVIEW: http://review.gluster.org/9396 (cluster/ec: Handle CHILD UP/DOWN in all cases) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 4 Anand Avati 2015-01-28 13:06:45 UTC
REVIEW: http://review.gluster.org/9396 (cluster/ec: Handle CHILD UP/DOWN in all cases) posted (#4) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 5 Anand Avati 2015-01-28 15:53:56 UTC
REVIEW: http://review.gluster.org/9396 (cluster/ec: Handle CHILD UP/DOWN in all cases) posted (#5) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 6 Anand Avati 2015-01-29 03:49:59 UTC
COMMIT: http://review.gluster.org/9396 committed in master by Pranith Kumar Karampuri (pkarampu) 
------
commit a48b18d6f661f863371e625084a88a01aaf989f0
Author: Pranith Kumar K <pkarampu>
Date:   Thu Jan 8 15:39:40 2015 +0530

    cluster/ec: Handle CHILD UP/DOWN in all cases
    
    Problem:
    When all the bricks are down at the time of mounting the volume, then mount
    command hangs.
    
    Fix:
    1. Ignore all CHILD_CONNECTING events coming from subvolumes.
    2. On timer expiration (without enough up or down children) send
       CHILD_DOWN.
    3. Once enough up or down subvolumes are detected, send the appropriate event.
       When the rest of the subvolumes go up/down without changing the overall
       ec-up/ec-down state, send CHILD_MODIFIED to the parent subvolumes.
    
    Change-Id: Ie0194dbadef2dce36ab5eb7beece84a6bf3c631c
    BUG: 1179180
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/9396
    Reviewed-by: Xavier Hernandez <xhernandez>
    Tested-by: Gluster Build System <jenkins.com>
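
A minimal sketch in C of the event handling described in the commit above. All names and the state layout here are hypothetical and simplified; the real logic lives in ec's notify path (xlators/cluster/ec/src/ec.c) and uses gluster's own xlator and event types:

    /* Hypothetical, simplified sketch -- not the actual ec code. */
    #include <stdio.h>
    #include <stdbool.h>

    enum child_event  { CHILD_CONNECTING, CHILD_UP, CHILD_DOWN };
    enum parent_event { EV_NONE, EV_CHILD_UP, EV_CHILD_DOWN, EV_CHILD_MODIFIED };

    struct ec_state {
        int  fragments;  /* data fragments: bricks needed to serve the volume */
        int  total;      /* total bricks in the disperse set */
        int  up, down;   /* children that have reported UP / DOWN so far */
        bool decided;    /* parent already told CHILD_UP or CHILD_DOWN? */
    };

    static enum parent_event on_child_event(struct ec_state *s, enum child_event ev)
    {
        if (ev == CHILD_CONNECTING)
            return EV_NONE;                       /* step 1: ignore CONNECTING */

        if (ev == CHILD_UP)   s->up++;
        else                  s->down++;

        if (s->decided)                           /* step 3: state already sent; */
            return EV_CHILD_MODIFIED;             /* assume overall up/down unchanged */

        if (s->up >= s->fragments) {              /* enough bricks to serve data */
            s->decided = true;
            return EV_CHILD_UP;
        }
        if (s->down > s->total - s->fragments) {  /* too many down to ever serve */
            s->decided = true;
            return EV_CHILD_DOWN;
        }
        return EV_NONE;                           /* keep waiting for more events */
    }

    /* Step 2: if the timer fires before enough ups or downs were seen,
     * give up and send CHILD_DOWN so the mount fails instead of hanging. */
    static enum parent_event on_timer_expiry(struct ec_state *s)
    {
        if (s->decided)
            return EV_NONE;
        s->decided = true;
        return EV_CHILD_DOWN;
    }

    int main(void)
    {
        /* disperse 3 redundancy 1: 2 data fragments, 3 bricks total */
        struct ec_state s = { .fragments = 2, .total = 3 };
        on_child_event(&s, CHILD_CONNECTING);     /* ignored, no longer hangs */
        printf("%d\n", on_timer_expiry(&s));      /* EV_CHILD_DOWN: mount returns */
        return 0;
    }

With all bricks down, no CHILD_UP ever arrives; before the fix the CONNECTING events kept the mount waiting forever, whereas here the timer expiry forces a CHILD_DOWN so the mount can fail promptly.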

Comment 7 Anand Avati 2015-02-01 16:27:22 UTC
REVIEW: http://review.gluster.org/9523 (cluster/ec: Wait for all bricks to notify before notifying parent) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 8 Anand Avati 2015-02-02 12:29:09 UTC
REVIEW: http://review.gluster.org/9523 (cluster/ec: Wait for all bricks to notify before notifying parent) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 9 Anand Avati 2015-02-02 12:36:41 UTC
REVIEW: http://review.gluster.org/9523 (cluster/ec: Wait for all bricks to notify before notifying parent) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 10 Anand Avati 2015-02-02 20:21:57 UTC
REVIEW: http://review.gluster.org/9523 (cluster/ec: Wait for all bricks to notify before notifying parent) posted (#5) for review on master by Vijay Bellur (vbellur)

Comment 11 Anand Avati 2015-02-02 20:22:15 UTC
COMMIT: http://review.gluster.org/9523 committed in master by Vijay Bellur (vbellur) 
------
commit da1ff66255017501f54c50b3c40eeea11b5fc38f
Author: Pranith Kumar K <pkarampu>
Date:   Sun Feb 1 15:03:46 2015 +0530

    cluster/ec: Wait for all bricks to notify before notifying parent
    
    This is to prevent premature notifications that can trigger spurious self-heals.
    
    Change-Id: I0b27c1c1fc7a58e2683cb1ca135117a85efcc6c9
    BUG: 1179180
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/9523
    Reviewed-by: Xavier Hernandez <xhernandez>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
    Tested-by: Vijay Bellur <vbellur>
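
The waiting behaviour from this second commit can be sketched the same way (again hypothetical and simplified; the real ec xlator tracks this per subvolume): hold back the first notification to the parent until every child has reported at least once, so ec never acts on a partial view of the bricks:

    #include <stdbool.h>

    struct ec_wait {
        int  total;     /* number of children (bricks) */
        int  reported;  /* children that have sent their first UP/DOWN */
        bool notified;  /* first notification already sent to the parent? */
    };

    /* Returns true when it is safe to notify the parent xlator. */
    static bool may_notify_parent(struct ec_wait *w, bool first_report)
    {
        if (first_report)
            w->reported++;
        if (!w->notified && w->reported == w->total)
            w->notified = true;   /* everyone has spoken; decide now */
        /* Before this point any UP/DOWN is a partial view, and acting on
         * it could trigger the spurious self-heals mentioned above. */
        return w->notified;
    }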

Comment 12 Niels de Vos 2015-05-14 17:28:51 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

