Bug 1796751 - Mount Issue when one of the Brick is down out of 6 bricks
Summary: Mount Issue when one of the Brick is down out of 6 bricks
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: 6
Hardware: x86_64
OS: Linux
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
Depends On:
Reported: 2020-01-31 05:50 UTC by Kannan
Modified: 2020-02-25 04:46 UTC (History)
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2020-02-25 04:46:05 UTC
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:

Attachments (Terms of Use)
mount logs (44.65 KB, text/plain)
2020-01-31 05:50 UTC, Kannan

Description Kannan 2020-01-31 05:50:07 UTC
Created attachment 1656636 [details]
mount logs

Description of problem:

The 10GB PVC is backed by a GlusterFS volume with 6 bricks in total (a replica 3 layout with two replica sets). One brick in one of the replica-3 sets is down; the other 5 bricks are up and running.
Despite this, I could not mount the volume from within the pod, nor from the host.
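For anyone hitting the same symptom, a rough diagnostic/workaround sketch follows. The volume name and the chnipc3stg04/chnipc3stg06 hostnames are taken from this report; the other hostname is a placeholder, and `backup-volfile-servers` is a standard glusterfs FUSE mount option, not the patch referenced later in this bug.

```shell
# Check which brick is down and whether any heals are pending
gluster volume status vol_92b542f70b51e9ce61fae194c3734dc4
gluster volume heal vol_92b542f70b51e9ce61fae194c3734dc4 info

# Mount with fallback volfile servers so the client does not depend on a
# single (possibly unreachable) server for the volfile; also capture a
# debug log to attach to the bug. Hostnames are examples.
mount -t glusterfs \
  -o backup-volfile-servers=chnipc3stg06.cluster.local \
  -o log-level=DEBUG -o log-file=/var/log/glusterfs/pvc-mount.log \
  chnipc3stg04.cluster.local:/vol_92b542f70b51e9ce61fae194c3734dc4 /mnt/pvc
```

This does not fix the underlying bug, but it distinguishes "volfile server unreachable" failures from the client-side issue addressed by the patch.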

Version-Release number of selected component (if applicable):
Heketi - 9.0
Glusterfs - 6.1

How reproducible:

Steps to Reproduce:

Actual results:
Following is the volume status of Glusterfs volume -vol_92b542f70b51e9ce61fae194c3734dc4

[root@chnipc3stg04 /]# gluster volume status vol_92b542f70b51e9ce61fae194c3734dc4
Status of volume: vol_92b542f70b51e9ce61fae194c3734dc4
Gluster process                             TCP Port  RDMA Port  Online  Pid
518a9b1bb44ef58dfd4268ac91375/brick         49176     0          Y       438
35bffea24f555a7254a117abe7cc1/brick         49174     0          Y       410
1fb94a6ce0725397440ee599f2e8c/brick         N/A       N/A        N       N/A
97f566703d5065eb186d25a947d6a/brick         49173     0          Y       447
1f229bdaff0c0059463f388f9e99c/brick         49175     0          Y       417
2ba8c20e8c87e3d2f5ffdb82c4316/brick         49177     0          Y       445
Self-heal Daemon on localhost               N/A       N/A        Y       60413
Self-heal Daemon on chnipc3stg06.cluster.local  N/A       N/A        Y       126726
Self-heal Daemon on             N/A       N/A        Y       23251
Self-heal Daemon on             N/A       N/A        Y       463
Self-heal Daemon on             N/A       N/A        Y       113620
Self-heal Daemon on             N/A       N/A        Y       114827

Task Status of Volume vol_92b542f70b51e9ce61fae194c3734dc4
Task                 : Rebalance
ID                   : 123311d5-816e-4803-91b3-538979dc3e3c
Status               : completed

I have checked on the node: the brick is still present, the PID shown in the volume status output is still running, and the port listed under "TCP Port" is still listening inside the GlusterFS pod.
But the volume could not be mounted from outside, and from the pod we could not mount it either.

Expected results:
The volume should mount as long as 2 of the 3 replica nodes are available.
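When a replica set misbehaves with one brick down, it is also worth confirming the quorum settings; a minimal check using standard GlusterFS volume options (the volume name is from this report):

```shell
# With replica 3, client quorum defaults to "auto" (majority), so losing
# one brick out of three should still leave the subvolume available.
gluster volume get vol_92b542f70b51e9ce61fae194c3734dc4 cluster.quorum-type
gluster volume get vol_92b542f70b51e9ce61fae194c3734dc4 cluster.server-quorum-type
```

If quorum is satisfied, a mount failure like this points at the client or volfile-fetch path rather than the replica itself.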

Additional info:

I have attached the brick logs for all volumes, along with the mount logs.

Comment 1 Mohit Agrawal 2020-01-31 06:31:35 UTC
This is a known issue; it will be fixed once the patch is merged.

Comment 2 Kannan 2020-01-31 06:40:30 UTC
Thanks @Mohit...
When will this patch be available for us to use?
Which GlusterFS release will include it?

Comment 3 Mohit Agrawal 2020-02-25 04:46:05 UTC
The patch is merged in release-6.
