Bug 1796751

Summary: Mount issue when one of the 6 bricks is down
Product: [Community] GlusterFS
Reporter: Kannan <kannanv06>
Component: core
Assignee: bugs <bugs>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: urgent
Docs Contact:
Priority: unspecified
Version: 6
CC: bugs, moagrawa, rhs-bugs, vbellur
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-02-25 04:46:05 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
mount logs (flags: none)

Description Kannan 2020-01-31 05:50:07 UTC
Created attachment 1656636 [details]
mount logs

Description of problem:

We have 6 bricks for a 10GB PVC; each node hosts 2 bricks, so there are 6 bricks in total. The volume is a 3-way replica, and one brick of the replica was down, leaving 5 of the 6 bricks up and running.
Even so, I could not mount the volume from within the Pod, nor could I mount it from the host.
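
For reference, a manual mount attempt from the host would look roughly like the following; the mount point /mnt/test is only illustrative, and the volume name is the one shown in the status output further down:

mount -t glusterfs 10.8.30.244:/vol_92b542f70b51e9ce61fae194c3734dc4 /mnt/test   # /mnt/test is a placeholder mount point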

Version-Release number of selected component (if applicable):
Heketi - 9.0
Glusterfs - 6.1


How reproducible:



Steps to Reproduce:
1.
2.
3.

Actual results:
The following is the volume status of the Glusterfs volume vol_92b542f70b51e9ce61fae194c3734dc4:

=========
[root@chnipc3stg04 /]# gluster volume status vol_92b542f70b51e9ce61fae194c3734dc4
Status of volume: vol_92b542f70b51e9ce61fae194c3734dc4
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.8.30.244:/var/lib/heketi/mounts/vg
_c7173f0e66d49934e004b58f9951903f/brick_a75
518a9b1bb44ef58dfd4268ac91375/brick         49176     0          Y       438
Brick 10.8.30.245:/var/lib/heketi/mounts/vg
_7b477fb6bfdc692bf3e7b05e93e4d5f4/brick_e8b
35bffea24f555a7254a117abe7cc1/brick         49174     0          Y       410
Brick 10.8.30.246:/var/lib/heketi/mounts/vg
_1b7440bad4ce40de70d16b8e3f141772/brick_d42
1fb94a6ce0725397440ee599f2e8c/brick         N/A       N/A        N       N/A
Brick 10.8.30.246:/var/lib/heketi/mounts/vg
_1b7440bad4ce40de70d16b8e3f141772/brick_e09
97f566703d5065eb186d25a947d6a/brick         49173     0          Y       447
Brick 10.8.30.245:/var/lib/heketi/mounts/vg
_7b477fb6bfdc692bf3e7b05e93e4d5f4/brick_597
1f229bdaff0c0059463f388f9e99c/brick         49175     0          Y       417
Brick 10.8.30.244:/var/lib/heketi/mounts/vg
_c7173f0e66d49934e004b58f9951903f/brick_09b
2ba8c20e8c87e3d2f5ffdb82c4316/brick         49177     0          Y       445
Self-heal Daemon on localhost               N/A       N/A        Y       60413
Self-heal Daemon on chnipc3stg06.cluster.lo
cal                                         N/A       N/A        Y       126726
Self-heal Daemon on 10.8.30.247             N/A       N/A        Y       23251
Self-heal Daemon on 10.8.30.245             N/A       N/A        Y       463
Self-heal Daemon on 10.8.30.242             N/A       N/A        Y       113620
Self-heal Daemon on 10.8.30.243             N/A       N/A        Y       114827

Task Status of Volume vol_92b542f70b51e9ce61fae194c3734dc4
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : 123311d5-816e-4803-91b3-538979dc3e3c
Status               : completed

=======================
I have checked: on nodes 10.8.30.244 and 10.8.30.246 the volume's bricks are still present, the PIDs listed in the volume status are still running, and the ports listed under "TCP Port" are still listening inside the Glusterfs Pod.
But I could not mount the volume from outside, and we could not mount it from the pod either.
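
For context, this is roughly how the brick process and port can be double-checked on a node whose brick is reported as online; the PID and port below are the ones reported for the first brick in the status output, and ss may need to be swapped for netstat depending on what is installed in the pod:

ps -p 438 -o pid,cmd      # check that the brick process is still alive
ss -ltnp | grep 49176     # check that the brick's TCP port is still listening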




Expected results:
The volume should be mountable when 2 out of 3 nodes are available.
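
As a side note, whether a replica-3 volume remains mountable and writable with one brick down can also depend on the quorum options set on the volume; these can be inspected with the standard gluster commands, for example:

gluster volume get vol_92b542f70b51e9ce61fae194c3734dc4 cluster.quorum-type
gluster volume get vol_92b542f70b51e9ce61fae194c3734dc4 cluster.server-quorum-type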


Additional info:

I have attached the brick logs for all volumes, as well as the mount logs.
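
For anyone reviewing the attachment: the client-side mount log is normally written by the FUSE client under /var/log/glusterfs/, with the mount path encoded in the file name, for example:

tail -n 50 /var/log/glusterfs/mnt-test.log   # the file name depends on the actual mount point used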

Comment 1 Mohit Agrawal 2020-01-31 06:31:35 UTC
This is a known issue; it will be fixed once the following patch is merged:
https://review.gluster.org/24061

Comment 2 Kannan 2020-01-31 06:40:30 UTC
Thanks @Mohit...
When will this patch be available for us to use?
Which Glusterfs release will include it?

Comment 3 Mohit Agrawal 2020-02-25 04:46:05 UTC
The patch is merged in release-6.
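
For reference, once a build containing the fix is installed, the running client/server version can be confirmed with:

glusterfs --version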