Bug 1179180
| Summary: | When the volume is in stopped state / all the bricks are down, mount of the volume hangs | |||
|---|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Pranith Kumar K <pkarampu> | |
| Component: | disperse | Assignee: | Pranith Kumar K <pkarampu> | |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | ||
| Severity: | unspecified | Docs Contact: | ||
| Priority: | unspecified | |||
| Version: | mainline | CC: | bugs, gluster-bugs, iesool | |
| Target Milestone: | --- | |||
| Target Release: | --- | |||
| Hardware: | Unspecified | |||
| OS: | Unspecified | |||
| Whiteboard: | ||||
| Fixed In Version: | glusterfs-3.7.0 | Doc Type: | Bug Fix | |
| Doc Text: | Story Points: | --- | ||
| Clone Of: | ||||
| : | 1188471 (view as bug list) | Environment: | ||
| Last Closed: | 2015-05-14 17:28:51 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | ||
| Verified Versions: | Category: | --- | ||
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
| Cloudforms Team: | --- | Target Upstream Version: | ||
| Embargoed: | ||||
| Bug Depends On: | ||||
| Bug Blocks: | 1188471 | |||
REVIEW: http://review.gluster.org/9396 (cluster/ec: Handle CHILD UP/DOWN in all cases) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

REVIEW: http://review.gluster.org/9396 (cluster/ec: Handle CHILD UP/DOWN in all cases) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)

REVIEW: http://review.gluster.org/9396 (cluster/ec: Handle CHILD UP/DOWN in all cases) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu)

REVIEW: http://review.gluster.org/9396 (cluster/ec: Handle CHILD UP/DOWN in all cases) posted (#4) for review on master by Pranith Kumar Karampuri (pkarampu)

REVIEW: http://review.gluster.org/9396 (cluster/ec: Handle CHILD UP/DOWN in all cases) posted (#5) for review on master by Pranith Kumar Karampuri (pkarampu)

COMMIT: http://review.gluster.org/9396 committed in master by Pranith Kumar Karampuri (pkarampu)

------

commit a48b18d6f661f863371e625084a88a01aaf989f0
Author: Pranith Kumar K <pkarampu>
Date: Thu Jan 8 15:39:40 2015 +0530

cluster/ec: Handle CHILD UP/DOWN in all cases

Problem: When all the bricks are down at the time of mounting the volume, the mount command hangs.

Fix:
1. Ignore all CHILD_CONNECTING events coming from subvolumes.
2. On timer expiration (without enough up or down children), send CHILD_DOWN.
3. Once enough up or down subvolumes are detected, send the appropriate event. When the rest of the subvolumes go up/down without changing the overall ec-up/ec-down state, send CHILD_MODIFIED to the parent subvolumes.
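The three-step fix in the commit message can be sketched as a small state machine. This is a hypothetical, simplified model for illustration only; the names (`ec_state`, `ec_handle_child_event`, `ec_timer_expired`, the event enum) are invented for this sketch and are not the actual GlusterFS ec translator API.

```c
#include <assert.h>

/* Illustrative events; the real translator uses GF_EVENT_* constants. */
enum event { CHILD_CONNECTING, CHILD_UP, CHILD_DOWN, CHILD_MODIFIED, EVENT_NONE };

struct ec_state {
    int total;       /* number of subvolumes (bricks) */
    int fragments;   /* minimum up subvolumes needed to serve data */
    int up;          /* subvolumes that reported up */
    int down;        /* subvolumes that reported down */
    int notified_up; /* 1 once CHILD_UP has been sent to the parent */
};

/* Returns the event to propagate to the parent translator, or EVENT_NONE. */
static enum event ec_handle_child_event(struct ec_state *s, enum event ev)
{
    if (ev == CHILD_CONNECTING)
        return EVENT_NONE;            /* step 1: ignore CONNECTING events */
    if (ev == CHILD_UP)
        s->up++;
    if (ev == CHILD_DOWN)
        s->down++;
    if (!s->notified_up && s->up >= s->fragments) {
        s->notified_up = 1;
        return CHILD_UP;              /* step 3: enough children are up */
    }
    if (!s->notified_up && s->down > s->total - s->fragments)
        return CHILD_DOWN;            /* too many down to ever come up */
    if (s->notified_up)
        return CHILD_MODIFIED;        /* overall ec-up state unchanged */
    return EVENT_NONE;
}

/* Step 2: on timer expiry without enough up/down reports, send
   CHILD_DOWN so the mount fails fast instead of hanging forever. */
static enum event ec_timer_expired(struct ec_state *s)
{
    return s->notified_up ? EVENT_NONE : CHILD_DOWN;
}
```

With `disperse 3 redundancy 1` (as in the reproducer below), `fragments` would be 2: two bricks up is enough to send CHILD_UP, and a third brick coming up later only produces CHILD_MODIFIED.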
Change-Id: Ie0194dbadef2dce36ab5eb7beece84a6bf3c631c
BUG: 1179180
Signed-off-by: Pranith Kumar K <pkarampu>
Reviewed-on: http://review.gluster.org/9396
Reviewed-by: Xavier Hernandez <xhernandez>
Tested-by: Gluster Build System <jenkins.com>

REVIEW: http://review.gluster.org/9523 (cluster/ec: Wait for all bricks to notify before notifying parent) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

REVIEW: http://review.gluster.org/9523 (cluster/ec: Wait for all bricks to notify before notifying parent) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)

REVIEW: http://review.gluster.org/9523 (cluster/ec: Wait for all bricks to notify before notifying parent) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu)

REVIEW: http://review.gluster.org/9523 (cluster/ec: Wait for all bricks to notify before notifying parent) posted (#5) for review on master by Vijay Bellur (vbellur)

COMMIT: http://review.gluster.org/9523 committed in master by Vijay Bellur (vbellur)

------

commit da1ff66255017501f54c50b3c40eeea11b5fc38f
Author: Pranith Kumar K <pkarampu>
Date: Sun Feb 1 15:03:46 2015 +0530

cluster/ec: Wait for all bricks to notify before notifying parent

This is to prevent spurious self-heals.

Change-Id: I0b27c1c1fc7a58e2683cb1ca135117a85efcc6c9
BUG: 1179180
Signed-off-by: Pranith Kumar K <pkarampu>
Reviewed-on: http://review.gluster.org/9523
Reviewed-by: Xavier Hernandez <xhernandez>
Tested-by: Gluster Build System <jenkins.com>
Reviewed-by: Vijay Bellur <vbellur>
Tested-by: Vijay Bellur <vbellur>

This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report. glusterfs-3.7.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future.
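The idea of the second patch (wait until every brick has reported before notifying the parent, to avoid spurious self-heals triggered by a partial picture of the cluster) can be sketched as follows. This is an illustrative model only; `ec_wait` and `ec_child_reported` are invented names, not the actual GlusterFS code.

```c
#include <assert.h>

/* Hypothetical sketch: defer the parent notification until every
   subvolume has reported either up or down, so a late-arriving report
   cannot make an early notification look like a degraded volume. */
struct ec_wait {
    int total;    /* number of subvolumes */
    int reported; /* subvolumes that have reported so far */
    int up;       /* how many of those reported up */
};

/* Record one child's report; returns 1 when all children have
   reported and the parent may finally be notified. */
static int ec_child_reported(struct ec_wait *w, int is_up)
{
    w->reported++;
    if (is_up)
        w->up++;
    return w->reported == w->total;
}
```

In practice such a wait would still be bounded by a timer (as in the first patch) so that an unreachable brick cannot stall the mount indefinitely.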
Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
Description of problem:
When all the bricks are down at the time of mounting the volume, the mount command hangs. If only the fragment number of bricks is up, the mount takes 5 seconds to succeed.

```
root@pranithk-laptop - ~ 17:12:37 :( ⚡ glusterd && gluster volume create ec2 disperse 3 redundancy 1 pranithk-laptop:/home/gfs/ec_{2,3,4} force
volume create: ec2: success: please start the volume to access data
root@pranithk-laptop - ~ 17:12:42 :) ⚡ mount -t glusterfs pranithk-laptop:/ec2 /mnt/fuse1
^C
root@pranithk-laptop - ~ 17:12:55 :( ⚡ ls /mnt/fuse1
^C^C^C
```

The commands above hung; the mount had to be killed to get the prompt back.

How reproducible: always