Bug 1331226 - Cloned volume is not cleaned up on deletion
Summary: Cloned volume is not cleaned up on deletion
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: GlusterFS
Classification: Community
Component: snapshot
Version: 3.7.11
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Avra Sengupta
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-04-28 03:24 UTC by Luis Pabón
Modified: 2016-11-08 22:26 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-10-21 10:06:44 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Luis Pabón 2016-04-28 03:24:23 UTC
Description of problem:
When a cloned volume is deleted, there should be a way to also tear down what GlusterFS created for it: unmount the clone's brick mount points and remove its logical volumes.

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
- gluster volume create myvol ...
- gluster snapshot create mysnap myvol
- gluster snapshot activate mysnap
- gluster snapshot clone myclone mysnap
- gluster snapshot deactivate mysnap
- gluster snapshot delete mysnap 
- gluster volume delete myclone


Actual results:
The clone's LVs are still mounted and their space is still allocated in the thin pools.
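
The leftovers can be confirmed from LVM and the mount table (the exact VG/LV names and mount paths depend on the snapshot/clone names and on the node's setup, so the commands below are only illustrative):

# thin LVs backing the clone still show up, with space charged to the thin pool
lvs -o vg_name,lv_name,pool_lv,data_percent

# the brick mount created for the clone is still present
grep gluster /proc/mounts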

Expected results:
All LVs are unmounted and destroyed so that the space is returned to the thin pools.

Alternatively, require one more command to fully destroy the clone, e.g.:

- gluster snapshot clone destroy myclone

Additional info:
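Until something like the above exists, the only way to reclaim the space is a manual cleanup on every brick host after the volume delete. A rough sketch (all paths and VG/LV names below are illustrative; take the real ones from /proc/mounts and lvs before removing anything):

# unmount the brick mount that backed the cloned volume
umount /run/gluster/snaps/myclone/brick1

# remove the backing thin LV so its blocks are returned to the thin pool
lvremove /dev/vg_gluster/myclone_0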

Comment 1 Avra Sengupta 2016-10-21 10:06:44 UTC
The behaviour of snapshot clone delete is in sync with the current volume delete behaviour, where we do not clean up the volume's bricks, irrespective of how the bricks were created (whether user created or snapshot restored).

I believe that is the right thing to do, as we do not want to mess with the user's backend data. Closing this as it's not a bug, but expected behaviour.

