Bug 1571620 - [Tracker-RHGS-BZ#1631329] arbiter brick is not getting unmounted
Summary: [Tracker-RHGS-BZ#1631329] arbiter brick is not getting unmounted
Keywords:
Status: CLOSED DUPLICATE of bug 1584639
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhgs-server-container
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Saravanakumar
QA Contact: Nitin Goyal
URL:
Whiteboard:
Depends On: 1631329
Blocks: 1534953 OCS-3.11.1-Engineering-Proposed-BZs OCS-3.11.1-devel-triage-done
 
Reported: 2018-04-25 08:41 UTC by Nitin Goyal
Modified: 2019-04-04 05:14 UTC
13 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-04-04 05:14:05 UTC
Embargoed:


Attachments (Terms of Use)
Contains all log files and the df output of the pod where the brick is not unmounted (1.79 KB, text/plain)
2018-04-25 09:49 UTC, Nitin Goyal

Description Nitin Goyal 2018-04-25 08:41:25 UTC
Description of problem: The arbiter brick is not getting unmounted after the PVC is deleted; it is still mounted in the gluster pod.
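
For reference, a minimal way to check for a leftover brick mount inside a gluster pod (a sketch; the "glusterfs" project name, the pod name and the brick mount path are assumptions based on a typical CNS/OCS deployment):

# find the gluster pods in the storage project
oc get pods -n glusterfs -o wide

# inside the affected gluster pod, look for brick mounts left behind after the PVC delete
oc rsh <gluster-pod>
df -hT | grep /var/lib/heketi/mounts    # brick LVs are normally mounted under this path
gluster volume list                     # the deleted volume should no longer be listed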


Version-Release number of selected component (if applicable): 6.0.0-11


How reproducible:


Steps to Reproduce:
1. Create a PVC of 2 GB.
2. Mount it on two clients.
3. Write files into the volume from both clients in parallel until the volume is full.
4. Once the volume is full and no more files can be written to it, delete the PVC.
(A command-level sketch of these steps is shown below.)
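
A rough command-level sketch of the above steps (the StorageClass name glusterfs-storage, the PVC name and the /mnt mount point inside the client pods are assumptions):

# 1. create a 2 Gi PVC
cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: arbiter-test-pvc
spec:
  storageClassName: glusterfs-storage
  accessModes: [ "ReadWriteMany" ]
  resources:
    requests:
      storage: 2Gi
EOF

# 2./3. with the PVC mounted at /mnt in two client pods, fill it from both in parallel
dd if=/dev/zero of=/mnt/fill.$HOSTNAME bs=1M    # runs until the write fails with ENOSPC

# 4. once writes fail with "No space left on device", delete the PVC
oc delete pvc arbiter-test-pvc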

Actual results: The arbiter brick is still mounted in the gluster pod.


Expected results: The arbiter brick should be unmounted after the PVC is deleted.

Comment 2 Nitin Goyal 2018-04-25 09:49:44 UTC
Created attachment 1426507 [details]
Contains all log files and the df output of the pod where the brick is not unmounted

Comment 3 Nitin Goyal 2018-05-02 12:31:18 UTC
It is not easy to reproduce; the behaviour is random.

Comment 4 John Mulligan 2018-05-15 17:51:19 UTC
Does this only happen when you are using arbiter volumes? If you follow the exact same procedure with a non-arbiter replica-3 volume, do you get a similar result?

Comment 5 Michael Adam 2018-05-15 19:48:33 UTC
This might be the same root cause as BZ #1565977.

- Do you actually kill the client pods (i.e. unmount the clients)?
- Does the PVC delete operation (seem to) succeed?
- As John said: does it also happen with replica-3?
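
For the replica-3 comparison, only the StorageClass should need to change. A minimal sketch using the kubernetes.io/glusterfs provisioner (the resturl, secret names and StorageClass name are placeholders; the volumeoptions parameter is the usual way arbiter volumes are requested from heketi):

# arbiter StorageClass; dropping the volumeoptions line gives a plain replica-3 StorageClass
cat <<EOF | oc create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-arbiter
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage.glusterfs.svc:8080"
  restuser: "admin"
  secretNamespace: "glusterfs"
  secretName: "heketi-storage-admin-secret"
  volumeoptions: "user.heketi.arbiter true"
EOF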

Comment 8 Nitin Goyal 2018-05-16 05:31:32 UTC
There is a link in the attachment in the first line, which contains all the logs.

Comment 10 Nitin Goyal 2018-05-16 10:36:39 UTC
(In reply to Michael Adam from comment #5)
> This might be the same root cause as BZ #1565977.
> 
> - do you actually kill the client pods (unmount the clients)
No, I did not unmount that volume. I was performing some I/O operations.

> - does the PVC delete operation (seem to) succeed?
Yes, the PVC delete operation succeeded.

> - as john said: does it also happen with replica-3?
I have not tried it with replica-3.

Comment 29 Sri Vignesh Selvan 2019-01-23 07:27:29 UTC
Tried reproducing with the latest 3.11.1 builds:
glusterfs-api-3.12.2-32.el7rhgs.x86_64
glusterfs-cli-3.12.2-32.el7rhgs.x86_64
python2-gluster-3.12.2-32.el7rhgs.x86_64
glusterfs-fuse-3.12.2-32.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-32.el7rhgs.x86_64
glusterfs-libs-3.12.2-32.el7rhgs.x86_64
glusterfs-3.12.2-32.el7rhgs.x86_64
glusterfs-client-xlators-3.12.2-32.el7rhgs.x86_64
glusterfs-server-3.12.2-32.el7rhgs.x86_64
gluster-block-0.2.1-30.el7rhgs.x86_64
heketi-client-8.0.0-8.el7rhgs.x86_64

1. Created 100 volumes, then deleted all 100 of them in parallel.
   Tried with both replica and arbiter volume types, as this issue was seen with
   both volume types.
2. Checked the heketi topology info and ran df -hT inside each gluster pod
   (oc rsh <gluster pod>), looking for bricks that were still mounted.
3. All bricks are unmounted from their mountpoints, and the topology info also
   does not show any leftover bricks or space still used by deleted volumes.
   (A sketch of this check is shown below.)
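
A sketch of the check in steps 2 and 3 (the glusterfs project name, the glusterfs=storage-pod label and the heketi endpoint are assumptions):

# heketi's view of the bricks
heketi-cli --server http://heketi-storage.glusterfs.svc:8080 --user admin --secret <admin key> topology info

# what is actually mounted in each gluster pod
for pod in $(oc get pods -n glusterfs -l glusterfs=storage-pod -o name); do
    echo "== $pod =="
    oc rsh -n glusterfs "$pod" df -hT | grep /var/lib/heketi/mounts || echo "no brick mounts left"
done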

Comment 30 Raghavendra Talur 2019-04-04 03:45:17 UTC
*** Bug 1584639 has been marked as a duplicate of this bug. ***

