Bug 1461717
| Summary: | Not release the .glusterfs directory after volume stop when cluster.brick-multiplex=on | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | jsvisa <delweng> |
| Component: | core | Assignee: | Mohit Agrawal <moagrawa> |
| Status: | CLOSED EOL | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.10 | CC: | amukherj, bugs |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
This bug is reported against a version of Gluster that is no longer maintained (or has been EOL'd). See https://www.gluster.org/release-schedule/ for the versions currently maintained. As a result, this bug is being closed. If the bug persists on a maintained version of Gluster or against the mainline Gluster repository, please request that it be reopened and mark the Version field appropriately.
Description of problem: I'm using the brick multiplexing feature, which was added in version 3.10. After creating and starting three volumes, I destroyed one of them. Once the volume was stopped and deleted, I tried to unmount the disk that had been backing the volume, but the unmount failed with a "device is busy" error. Running `lsof` to find which processes were still holding files on that disk showed that the glusterfsd process still had these files open:

    # lsof | grep /path/to/disk | awk '{print $10}' | sort -u
    .glusterfs/brick.db
    .glusterfs/brick.db-shm
    .glusterfs/brick.db-wal

Version-Release number of selected component (if applicable): 3.10.3

How reproducible:

Steps to Reproduce:
1. gluster volume create a, b, c
2. gluster volume start a, b, c
3. gluster volume stop b
4. gluster volume delete b
5. unmount the disk backing volume b (see the command sketch below)

Actual results: unmount fails

Expected results: unmount succeeds

Additional info:
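For reference, a minimal shell sketch of the reproduction steps above, assuming a single-node setup; the host name `host1` and the brick mount points `/mnt/brick-a`, `/mnt/brick-b`, `/mnt/brick-c` are hypothetical placeholders, since the original report does not give the real paths:

```sh
# Reproduction sketch. Assumptions (not from the original report):
# single node "host1", bricks mounted at /mnt/brick-a, /mnt/brick-b,
# /mnt/brick-c. First enable brick multiplexing cluster-wide.
gluster volume set all cluster.brick-multiplex on

# Steps 1-2: create and start three single-brick volumes.
for v in a b c; do
    gluster volume create "$v" host1:/mnt/brick-"$v"/brick force
    gluster volume start "$v"
done

# Steps 3-4: stop and delete volume b (--mode=script skips the
# interactive confirmation prompts).
gluster --mode=script volume stop b
gluster --mode=script volume delete b

# Step 5: try to unmount the disk that backed volume b. On the affected
# version this fails with "target is busy" because the shared glusterfsd
# process still holds .glusterfs/brick.db* open.
umount /mnt/brick-b

# Confirm which process is still holding files under the mount point:
lsof +D /mnt/brick-b | grep glusterfsd
```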