Bug 1571620
Summary: [Tracker-RHGS-BZ#1631329] arbiter brick is not getting unmounted
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: rhgs-server-container
Version: rhgs-3.3
Status: CLOSED DUPLICATE
Severity: medium
Priority: unspecified
Reporter: Nitin Goyal <nigoyal>
Assignee: Saravanakumar <sarumuga>
QA Contact: Nitin Goyal <nigoyal>
CC: amukherj, hchiramm, jmulligan, kramdoss, madam, nberry, nigoyal, pprakash, rhs-bugs, rtalur, sankarshan, sselvan, storage-qa-internal
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Doc Type: If docs needed, set a value
Story Points: ---
Last Closed: 2019-04-04 05:14:05 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Bug Depends On: 1631329
Bug Blocks: 1534953, 1641685, 1641915
Description — Nitin Goyal, 2018-04-25 08:41:25 UTC

Created attachment 1426507 [details]
Contains all log files and the df output of the pod where the brick is not unmounted.
It is not easy to reproduce; the behaviour is random.

Does this only happen when you are using arbiter volumes? If you follow the exact same procedure with non-arbiter replica-3 volumes, can you get a similar result?

This might be the same root cause as BZ #1565977.

- Do you actually kill the client pods (unmount the clients)?
- Does the PVC delete operation (seem to) succeed?
- As John said: does it also happen with replica-3?

There is a link in the attachment in the first comment which contains all the logs.

(In reply to Michael Adam from comment #5)
> This might be the same root cause as BZ #1565977.
>
> - do you actually kill the client pods (unmount the clients)

No, I did not unmount that volume. I was performing some I/O operations.

> - does the PVC delete operation (seem to) succeed?

Yes, the PVC delete operation was successful.

> - as john said: does it also happen with replica-3?

I have not tried it with replica-3.

Tried reproducing with the latest 3.11.1 builds:

glusterfs-api-3.12.2-32.el7rhgs.x86_64
glusterfs-cli-3.12.2-32.el7rhgs.x86_64
python2-gluster-3.12.2-32.el7rhgs.x86_64
glusterfs-fuse-3.12.2-32.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-32.el7rhgs.x86_64
glusterfs-libs-3.12.2-32.el7rhgs.x86_64
glusterfs-3.12.2-32.el7rhgs.x86_64
glusterfs-client-xlators-3.12.2-32.el7rhgs.x86_64
glusterfs-server-3.12.2-32.el7rhgs.x86_64
gluster-block-0.2.1-30.el7rhgs.x86_64
heketi-client-8.0.0-8.el7rhgs.x86_64

1. Created 100 volumes, then deleted all 100 in parallel. Tried with both replica and arbiter volume types, as this issue was seen with both.
2. Checked topology info, ran `oc rsh <gluster pod>`, checked `df -hT`, and looked for bricks left mounted.
3. All bricks were unmounted from their mount points, and the topology info does not show any unmounted bricks or space still used by deleted volumes.

*** Bug 1584639 has been marked as a duplicate of this bug. ***
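The leftover-brick check described in the reproduction steps (rsh into each gluster pod, inspect `df -hT`, look for brick mount points that should have gone away with their volumes) can be sketched as a small shell filter. This is a minimal sketch: the sample `df -hT` output below is hypothetical, and it assumes heketi-managed bricks are mounted under `/var/lib/heketi/mounts/`; in practice you would feed it the live output of `oc rsh <gluster pod> df -hT`.

```shell
# Hypothetical `df -hT` output from a gluster pod (values for illustration only).
df_output='Filesystem                 Type   Size  Used Avail Use% Mounted on
/dev/mapper/vg_x-brick_1   xfs    2.0G   33M  2.0G   2% /var/lib/heketi/mounts/vg_x/brick_1
/dev/mapper/vg_y-brick_2   xfs    2.0G   33M  2.0G   2% /var/lib/heketi/mounts/vg_y/brick_2
tmpfs                      tmpfs   16G     0   16G   0% /run'

# Print only brick mount points (last field under the heketi mounts prefix).
# Any entry listed here after its volume was deleted is a brick that was
# left mounted -- the symptom this bug describes.
echo "$df_output" | awk '$NF ~ /^\/var\/lib\/heketi\/mounts\// {print $NF}'
```

Cross-checking the surviving mount points against `heketi-cli topology info` (as in step 2 above) then tells you whether each remaining brick still belongs to a live volume.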