Bug 1571620

Summary: [Tracker-RHGS-BZ#1631329] arbiter brick is not getting unmounted
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Nitin Goyal <nigoyal>
Component: rhgs-server-container
Assignee: Saravanakumar <sarumuga>
Status: CLOSED DUPLICATE
QA Contact: Nitin Goyal <nigoyal>
Severity: medium
Docs Contact:
Priority: unspecified
Version: rhgs-3.3
CC: amukherj, hchiramm, jmulligan, kramdoss, madam, nberry, nigoyal, pprakash, rhs-bugs, rtalur, sankarshan, sselvan, storage-qa-internal
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-04-04 05:14:05 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1631329    
Bug Blocks: 1534953, 1641685, 1641915    
Attachments:
  Description: it contains all log files and df output of pod where brick is not unmounted
  Flags: none

Description Nitin Goyal 2018-04-25 08:41:25 UTC
Description of problem: The arbiter brick is not getting unmounted; it is still mounted in the gluster pod.


Version-Release number of selected component (if applicable): 6.0.0-11


How reproducible:


Steps to Reproduce:
1. Create a PVC of 2 GB.
2. Mount it on two clients.
3. Write files into the volume from both clients in parallel until the volume gets full.
4. When the volume is full and no more files can be written to it, delete the PVC (a hedged command sketch of these steps follows below).
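
A rough sketch of these steps on the command line, assuming an OpenShift cluster with a gluster-backed storage class that provisions arbiter volumes; the storage class name (glusterfs-storage), PVC name, and client pod names below are illustrative only:

  # 1. Create a 2 GB PVC against the gluster-backed storage class.
  oc create -f - <<'EOF'
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: arbiter-pvc-test
  spec:
    storageClassName: glusterfs-storage
    accessModes: ["ReadWriteMany"]
    resources:
      requests:
        storage: 2Gi
  EOF

  # 2. Mount the PVC on two client pods (client-1 and client-2 both reference
  #    arbiter-pvc-test in their volume spec and mount it at /mnt/data).
  # 3. Write from both clients in parallel until the writes fail with ENOSPC.
  for pod in client-1 client-2; do
    oc exec "$pod" -- sh -c 'i=0; while dd if=/dev/zero of=/mnt/data/f-$i bs=1M count=100; do i=$((i+1)); done' &
  done
  wait

  # 4. Delete the PVC once the volume is full.
  oc delete pvc arbiter-pvc-test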

Actual results: the arbiter brick is still mounted in the gluster pod.


Expected results: the arbiter brick should be unmounted once the PVC is deleted.
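
One way to check for a leftover mount after the PVC delete (the pod label selector and the use of "brick" as a grep pattern are assumptions; the exact brick directory for the deleted volume can be taken from the heketi topology):

  # List brick mounts inside every gluster pod; any line still referencing the
  # deleted volume's brick means the arbiter brick was not unmounted.
  for pod in $(oc get pods -l glusterfs=storage-pod -o name); do
    echo "== $pod =="
    oc rsh "$pod" df -hT | grep -i brick
  done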

Comment 2 Nitin Goyal 2018-04-25 09:49:44 UTC
Created attachment 1426507 [details]
it contains all log files and df output of pod where brick is not unmounted

Comment 3 Nitin Goyal 2018-05-02 12:31:18 UTC
It is not easy to reproduce; the behaviour is random.

Comment 4 John Mulligan 2018-05-15 17:51:19 UTC
Does this only happen when you are using arbiter volumes? If you follow the exact same procedure with a non-arbiter replica 3 volume, can you get a similar result?

Comment 5 Michael Adam 2018-05-15 19:48:33 UTC
This might be the same root cause as BZ #1565977.

- Do you actually kill the client pods (unmount the clients)?
- Does the PVC delete operation (seem to) succeed?
- As John said: does it also happen with replica-3?

Comment 8 Nitin Goyal 2018-05-16 05:31:32 UTC
There is a link in the attachment in the first line which contains all the logs.

Comment 10 Nitin Goyal 2018-05-16 10:36:39 UTC
(In reply to Michael Adam from comment #5)
> This might be the same root cause as BZ #1565977.
> 
> - do you actually kill the client pods (unmount the clients)
No, I did not unmount the volume. I was performing some I/O operations.

> - does the PVC delete operation (seem to) succeed?
Yes, the PVC delete operation was successful.

> - as john said: does it also happen with replica-3?
I have not tried it with replica-3.

Comment 29 Sri Vignesh Selvan 2019-01-23 07:27:29 UTC
Tried reproducing with the latest 3.11.1 builds:
glusterfs-api-3.12.2-32.el7rhgs.x86_64
glusterfs-cli-3.12.2-32.el7rhgs.x86_64
python2-gluster-3.12.2-32.el7rhgs.x86_64
glusterfs-fuse-3.12.2-32.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-32.el7rhgs.x86_64
glusterfs-libs-3.12.2-32.el7rhgs.x86_64
glusterfs-3.12.2-32.el7rhgs.x86_64
glusterfs-client-xlators-3.12.2-32.el7rhgs.x86_64
glusterfs-server-3.12.2-32.el7rhgs.x86_64
gluster-block-0.2.1-30.el7rhgs.x86_64
heketi-client-8.0.0-8.el7rhgs.x86_64

1. Created 100 volumes, then deleted all 100 in parallel. Tried with both replica and arbiter
   volume types, as this issue was seen with both.
2. Checked the heketi topology info and ran df -hT inside the gluster pods (oc rsh <gluster pod>),
   looking for any bricks left mounted (a command sketch follows below).
3. All bricks are unmounted from their mount points, and the topology info also
   does not show any stale bricks or space being used by the deleted volumes.
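
A sketch of the scale check described above, assuming heketi-cli is pointed at the heketi service and that arbiter volumes are requested through the user.heketi.arbiter volume option; the sizes, label selector, and output parsing are illustrative:

  # Create and then delete 100 small arbiter volumes in parallel.
  for i in $(seq 1 100); do
    heketi-cli volume create --size=1 --gluster-volume-options="user.heketi.arbiter true" &
  done
  wait
  for id in $(heketi-cli volume list | awk -F'Id:' '{print $2}' | awk '{print $1}'); do
    heketi-cli volume delete "$id" &
  done
  wait

  # Verify no stale brick mounts or space usage remain.
  heketi-cli topology info
  for pod in $(oc get pods -l glusterfs=storage-pod -o name); do
    oc rsh "$pod" df -hT | grep -i brick
  done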

Comment 30 Raghavendra Talur 2019-04-04 03:45:17 UTC
*** Bug 1584639 has been marked as a duplicate of this bug. ***