Description of problem:
The arbiter brick is not getting unmounted; it is still mounted in the gluster pod.

Version-Release number of selected component (if applicable):
6.0.0-11

How reproducible:

Steps to Reproduce:
1. Create a PVC of 2 GB.
2. Mount it on two clients.
3. Write files into that volume from both clients in parallel until the volume gets full.
4. When the volume is full and you are no longer able to write files into it, delete the PVC.

Actual results:
The arbiter brick is still mounted.

Expected results:
The arbiter brick should get unmounted.
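For reference, a minimal sketch of what steps 1-4 could look like on the command line. The StorageClass name, PVC name, client pod names and mount path are hypothetical placeholders, not taken from the actual setup:

# pvc.yaml -- 2 GB claim (StorageClass name is a placeholder)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: arbiter-test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: glusterfs-arbiter

# create the claim, fill the volume from both client pods in parallel,
# then delete the claim once writes start failing with ENOSPC
oc create -f pvc.yaml
oc exec client-pod-1 -- sh -c 'dd if=/dev/zero of=/mnt/vol/f1 bs=1M' &
oc exec client-pod-2 -- sh -c 'dd if=/dev/zero of=/mnt/vol/f2 bs=1M' &
wait
oc delete pvc arbiter-test-pvc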
Created attachment 1426507 [details] It contains all the log files and the df output of the pod where the brick is not unmounted.
It is not easy to reproduce; it is a random behaviour.
Does this only happen when you are using arbiter volumes? If you follow the exact same procedure with a non-arbiter replica 3 volume, can you get a similar result?
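For anyone retrying this with plain (non-arbiter) replica 3: the volume type is driven by the StorageClass. A sketch of the two variants, assuming the kubernetes.io/glusterfs provisioner and a heketi build that honours the user.heketi.arbiter volume option; the resturl value is a placeholder and the heketi secret settings are omitted:

# replica-3 StorageClass (sketch)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-replica3
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage:8080"
  volumetype: "replicate:3"

# arbiter StorageClass (sketch)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-arbiter
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage:8080"
  volumetype: "replicate:3"
  volumeoptions: "user.heketi.arbiter true"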
This might be the same root cause as BZ #1565977.

- do you actually kill the client pods (unmount the clients)?
- does the PVC delete operation (seem to) succeed?
- as John said: does it also happen with replica-3?
There is a link in the attachment in the first comment which contains all the logs.
(In reply to Michael Adam from comment #5)
> This might be the same root cause as BZ #1565977.
>
> - do you actually kill the client pods (unmount the clients)
No, I did not unmount that volume. I was performing some I/O operations.

> - does the PVC delete operation (seem to) succeed?
Yes, the PVC delete operation was successful.

> - as john said: does it also happen with replica-3?
I have not tried it with replica-3.
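To double-check the state described above (PVC delete reported as successful while the arbiter brick stays mounted), something like the following can be run; the gluster pod name is a placeholder and the brick path pattern assumes the usual heketi layout:

# the PVC and its PV should be gone after the delete
oc get pvc
oc get pv

# yet the brick mount can still show up inside the gluster pod
oc rsh <glusterfs-pod>
df -hT | grep /var/lib/heketi/mounts   # heketi normally mounts bricks under this path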
Tried reproducing with the latest 3.11.1 builds:

glusterfs-api-3.12.2-32.el7rhgs.x86_64
glusterfs-cli-3.12.2-32.el7rhgs.x86_64
python2-gluster-3.12.2-32.el7rhgs.x86_64
glusterfs-fuse-3.12.2-32.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-32.el7rhgs.x86_64
glusterfs-libs-3.12.2-32.el7rhgs.x86_64
glusterfs-3.12.2-32.el7rhgs.x86_64
glusterfs-client-xlators-3.12.2-32.el7rhgs.x86_64
glusterfs-server-3.12.2-32.el7rhgs.x86_64
gluster-block-0.2.1-30.el7rhgs.x86_64
heketi-client-8.0.0-8.el7rhgs.x86_64

1. Created 100 volumes, then deleted all 100 created volumes in parallel. Tried with both replica and arbiter volume types, as this issue was seen with replica and arbiter volume types.
2. Checked the topology info, did oc rsh <gluster pods>, ran df -hT, and checked for unmounted bricks.
3. All bricks are unmounted from their mountpoints, and the topology info also doesn't show any unmounted bricks or space being used by deleted volumes.
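For completeness, a sketch of the kind of create/delete loop and post-check used for this verification; the PVC template, naming scheme and the label selector for the gluster pods are assumptions and need to be adapted to the actual deployment:

# create 100 claims in parallel, then delete them all in parallel
for i in $(seq 1 100); do
  sed "s/NAME/test-pvc-$i/" pvc-template.yaml | oc create -f - &
done
wait
for i in $(seq 1 100); do
  oc delete pvc test-pvc-$i &
done
wait

# afterwards, no brick mounts belonging to the deleted volumes should remain
heketi-cli topology info
for pod in $(oc get pods -l glusterfs=storage-pod -o jsonpath='{.items[*].metadata.name}'); do
  oc exec "$pod" -- df -hT | grep /var/lib/heketi/mounts
done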
*** Bug 1584639 has been marked as a duplicate of this bug. ***