Bug 1461717 - .glusterfs directory not released after volume stop when cluster.brick-multiplex=on
Status: NEW
Product: GlusterFS
Classification: Community
Component: core
3.10
x86_64 Linux
unspecified Severity high
: ---
: ---
Assigned To: Mohit Agrawal
:
Depends On:
Blocks:
 
Reported: 2017-06-15 05:24 EDT by jsvisa
Modified: 2017-07-03 00:13 EDT (History)
2 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments (Terms of Use)

None
Description jsvisa 2017-06-15 05:24:49 EDT
Description of problem:

I'm using the brick multiplex feature, which was added in version 3.10.
After creating and starting 3 volumes, I wanted to destroy one of them.

After the volume was stopped and deleted, I tried to unmount the disk that previously backed it, but the unmount failed because the device was busy.

Running `lsof` to find which processes were holding files on the disk showed that glusterfsd still had these files open:

# lsof | grep /path/to/disk | awk '{print $10}' | sort -u
.glusterfs/brick.db
.glusterfs/brick.db-shm
.glusterfs/brick.db-wal
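The filter pipeline above can be illustrated on a made-up `lsof` snippet. Everything in the sample (PIDs, TIDs, device and inode numbers, and the `/path/to/disk` mount path) is hypothetical, and the sample assumes an `lsof` build that prints a TID column, so that the `NAME` column lands in field `$10` as in the report:

```shell
#!/bin/sh
# Hypothetical lsof output; every PID, TID, inode, and path below is made up
# purely to illustrate the filter pipeline from the report.
sample='glusterfsd 1234 1250 root 11u REG 253,2 32768 654321 /path/to/disk/.glusterfs/brick.db
glusterfsd 1234 1250 root 12u REG 253,2 32768 654322 /path/to/disk/.glusterfs/brick.db-shm
glusterfsd 1234 1251 root 13u REG 253,2 32768 654323 /path/to/disk/.glusterfs/brick.db-wal'

# Same pipeline as in the report: keep lines under the mount path,
# print the NAME column ($10), and de-duplicate.
printf '%s\n' "$sample" | grep /path/to/disk | awk '{print $10}' | sort -u
```

On real output, any surviving lines under the mount path identify the files (here the `.glusterfs/brick.db*` SQLite files) that keep the device busy.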


Version-Release number of selected component (if applicable):
3.10.3

How reproducible:


Steps to Reproduce:
1. gluster volume create a,b,c 
2. gluster volume start a,b,c
3. gluster volume stop b
4. gluster volume delete b
5. unmount the disk backing volume b
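As a sketch, the steps above could be run as the following shell session. The host name and brick paths are placeholders, and it assumes a live GlusterFS 3.10 cluster with brick multiplexing enabled, so this is a transcript rather than a runnable test:

```shell
# Placeholder host/paths; assumes a running GlusterFS 3.10 cluster.
gluster volume set all cluster.brick-multiplex on

# Create and start three single-brick volumes: a, b, c.
for v in a b c; do
    gluster volume create "$v" host1:/bricks/"$v"/brick force
    gluster volume start "$v"
done

# Stop and delete volume b, then try to unmount its brick disk.
gluster volume stop b
gluster volume delete b
umount /bricks/b        # fails: target is busy, because the multiplexed
                        # glusterfsd (still serving a and c) keeps the
                        # .glusterfs/brick.db* files of b's brick open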

Actual results:

The unmount fails because the device is still busy.


Expected results:

The unmount succeeds.


Additional info:
