Bug 1145073 - memory/configuration snapshot images are not deleted when deleting a VM's disks, in case those images were created on a file domain
Summary: memory/configuration snapshot images are not deleted when deleting a VM's disks, in case those images were created on a file domain
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.5.0
Assignee: Idan Shaby
QA Contact: Ori Gofen
URL:
Whiteboard: storage
Depends On:
Blocks: rhev3.5beta3
 
Reported: 2014-09-22 10:33 UTC by Ori Gofen
Modified: 2016-05-26 01:49 UTC
CC List: 15 users

Fixed In Version: org.ovirt.engine-root-3.5.0-14
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-02-16 19:09:05 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
vdsm+engine logs (1.93 MB, application/x-gzip), 2014-09-22 10:33 UTC, Ori Gofen
vdsm+engine logs (1.19 MB, application/x-gzip), 2014-09-22 13:03 UTC, Ori Gofen


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 33429 0 master MERGED core: Remove Memory and OVF Volumes on VM Remove Never
oVirt gerrit 33432 0 ovirt-engine-3.5 MERGED core: Remove Memory and OVF Volumes on VM Remove Never
oVirt gerrit 33464 0 ovirt-engine-3.4 ABANDONED core: Remove Memory and OVF Volumes on VM Remove Never

Description Ori Gofen 2014-09-22 10:33:12 UTC
Created attachment 939970 [details]
vdsm+engine logs

Description of problem:
When taking a live snapshot, vdsm creates 2 new images and a total of 3 new volumes.
The 2 new images contain the RAM dump and the VM's configuration, and are created on the master domain by default.
The third volume is added to the VM's disk image volume chain (the snapshot's data volume).

When deleting a VM together with its disks (removing the disks completely) that had live snapshots (with RAM saved), the RAM and configuration images are not wiped out in case the master domain is a file domain.
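
To make the layout above concrete, here is a minimal, self-contained Java sketch (all class, record, and variable names are made up for illustration; this is not vdsm's actual data model):

import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Hypothetical model, for illustration only: the pieces a live snapshot with RAM
// adds, matching the description above (3 new volumes in total, 2 of them in
// brand-new image groups on the master domain).
public class LiveSnapshotLayout {
    // A volume is identified by its image group id and its own volume id.
    record Volume(String imageGroupId, String volumeId) {}

    public static void main(String[] args) {
        // Existing disk: one image group with its current volume chain.
        String diskImageGroup = UUID.randomUUID().toString();
        List<Volume> diskChain = new ArrayList<>();
        diskChain.add(new Volume(diskImageGroup, UUID.randomUUID().toString())); // base volume

        // (1) A new data volume is appended to the disk's chain (the snapshot's data volume).
        diskChain.add(new Volume(diskImageGroup, UUID.randomUUID().toString()));

        // (2) + (3) Two new image groups are created, by default on the master domain:
        // one holding the RAM dump and one holding the VM configuration.
        Volume memoryDump = new Volume(UUID.randomUUID().toString(), UUID.randomUUID().toString());
        Volume vmConf     = new Volume(UUID.randomUUID().toString(), UUID.randomUUID().toString());

        System.out.println("disk chain after live snapshot: " + diskChain);
        System.out.println("memory dump image:              " + memoryDump);
        System.out.println("vm configuration image:         " + vmConf);
    }
}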

Version-Release number of selected component (if applicable):
vt3.1

How reproducible:
100%

Steps to Reproduce:
1. Have a DC with 2 domains (one block, one file; the master is the file domain)
2. Create a VM + OS, run the VM, and take a live snapshot with RAM
3. Power off the VM, then remove the VM and its disks

Actual results:
RAM and configuration images are not wiped out.

Expected results:
When removing a VM together with its snapshots, all data relevant to those snapshots should be deleted.
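
A minimal sketch of this expected flow, assuming hypothetical types and names (this is not the engine's actual code; the DeleteImageGroupVDSCommand verb seen in the logs is only imitated here by a StorageBroker stub): removing a VM with its disks should also issue a delete for each snapshot's memory and configuration image groups.

import java.util.List;
import java.util.Optional;

// Illustrative sketch only: all types and ids here are placeholders, not the
// engine's real classes. It shows the expected cleanup, i.e. that removing a VM
// together with its disks should also delete the memory-dump and configuration
// image groups of every snapshot that was taken with RAM.
public class RemoveVmSketch {
    // Where an image group lives: storage domain id + image group id.
    record ImageGroup(String domainId, String imageGroupId) {}

    // A snapshot may reference two extra image groups when memory was saved.
    record Snapshot(String id, Optional<ImageGroup> memoryDump, Optional<ImageGroup> vmConf) {}

    // Stand-in for the DeleteImageGroupVDSCommand verb from the engine log.
    interface StorageBroker {
        void deleteImageGroup(ImageGroup image, boolean postZeros);
    }

    static void removeVmWithDisks(List<ImageGroup> diskImages,
                                  List<Snapshot> snapshots,
                                  StorageBroker broker) {
        // The disk image groups are already removed today.
        diskImages.forEach(img -> broker.deleteImageGroup(img, false));
        // The missing part: also remove each snapshot's memory and configuration
        // images, so nothing is left behind on the (file) master domain.
        for (Snapshot s : snapshots) {
            s.memoryDump().ifPresent(img -> broker.deleteImageGroup(img, false));
            s.vmConf().ifPresent(img -> broker.deleteImageGroup(img, false));
        }
    }

    public static void main(String[] args) {
        String masterSd = "placeholder-master-domain-uuid";
        StorageBroker logOnly = (img, zero) ->
                System.out.printf("DeleteImageGroup domain=%s image=%s postZeros=%b%n",
                        img.domainId(), img.imageGroupId(), zero);
        removeVmWithDisks(
                List.of(new ImageGroup(masterSd, "placeholder-disk-image-uuid")),
                List.of(new Snapshot("live-snapshot",
                        Optional.of(new ImageGroup(masterSd, "placeholder-ram-image-uuid")),
                        Optional.of(new ImageGroup(masterSd, "placeholder-conf-image-uuid")))),
                logOnly);
    }
}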

Additional info:

Comment 1 Allon Mureinik 2014-09-22 11:46:35 UTC
The memory volume is located on domain 3b5ee215-d6c8-428e-95b0-737656595ebd:

2014-09-22 10:56:12,636 INFO  [org.ovirt.engine.core.bll.RemoveMemoryVolumesCommand] (ajp-/127.0.0.1:8702-2) [3fd8ea7e] Running command: RemoveMemoryVolumesCommand internal: true.
2014-09-22 10:56:12,681 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (ajp-/127.0.0.1:8702-2) [3fd8ea7e] START, DeleteImageGroupVDSCommand( storagePoolId = 00000002-0002-0002-0002-00000000006a, ignoreFailoverLimit = false, storageDomainId = 3b5ee215-d6c8-428e-95b0-737656595ebd, imageGroupId = ccf04fdd-f90f-4d4e-9767-b600a2a3098e, postZeros = false, forceDelete = false), log id: 6e4e2d58
2014-09-22 10:56:13,005 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (ajp-/127.0.0.1:8702-2) [3fd8ea7e] FINISH, DeleteImageGroupVDSCommand, log id: 6e4e2d58

This domain is clearly an NFS domain:
Thread-13::DEBUG::2014-09-22 10:59:05,687::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=nfs_1', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=1', 'POOL_DESCRIPTION=Default', 'POOL_DOMAINS=3b5ee215-d6c8-428e-95b0-737656595ebd:Active', 'POOL_SPM_ID=-1', 'POOL_SPM_LVER=-1', 'POOL_UUID=00000002-0002-0002-0002-00000000006a', 'REMOTE_PATH=10.35.160.108:/RHEV/ogofen/1', 'ROLE=Master', 'SDUUID=3b5ee215-d6c8-428e-95b0-737656595ebd', 'TYPE=NFS', 'VERSION=3', '_SHA_CKSUM=597601f230c65cc70ead57ebaa8242d645e8b35d']

Since this is a file domain, wiping is meaningless, and should indeed not be performed (see bug 1097820).
This is not a bug.
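
As a side note on the wiping point, a tiny hypothetical sketch (made-up names, not engine or vdsm code) of why postZeros only matters on block domains: freed extents there can be reallocated to other LVs, whereas on a file domain deleting the volume files is already enough.

// Hypothetical helper, for illustration only (not real engine code): wipe-after-delete
// (postZeros) is only meaningful on block storage, so a delete on a file domain
// should simply remove the files without any zeroing pass.
public class WipePolicySketch {
    enum DomainType { NFS, GLUSTERFS, POSIXFS, ISCSI, FCP }

    static boolean isFileDomain(DomainType t) {
        return t == DomainType.NFS || t == DomainType.GLUSTERFS || t == DomainType.POSIXFS;
    }

    // Effective postZeros flag: honor wipe-after-delete only on block domains.
    static boolean effectivePostZeros(boolean wipeAfterDelete, DomainType t) {
        return wipeAfterDelete && !isFileDomain(t);
    }

    public static void main(String[] args) {
        System.out.println(effectivePostZeros(true, DomainType.NFS));   // false: file domain, nothing to zero
        System.out.println(effectivePostZeros(true, DomainType.ISCSI)); // true: block domain
    }
}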

Comment 2 Ori Gofen 2014-09-22 12:02:06 UTC
Sorry Allon, by "wiped" I meant deleted: the images are not even deleted from the host.

Comment 3 Allon Mureinik 2014-09-22 12:20:51 UTC
The delete command is sent at 10:56:12

2014-09-22 10:56:12,636 INFO  [org.ovirt.engine.core.bll.RemoveMemoryVolumesCommand] (ajp-/127.0.0.1:8702-2) [3fd8ea7e] Running command: RemoveMemoryVolumesCommand internal: true.
2014-09-22 10:56:12,681 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (ajp-/127.0.0.1:8702-2) [3fd8ea7e] START, DeleteImageGroupVDSCommand( storagePoolId = 00000002-0002-0002-0002-00000000006a, ignoreFailoverLimit = false, storageDomainId = 3b5ee215-d6c8-428e-95b0-737656595ebd, imageGroupId = ccf04fdd-f90f-4d4e-9767-b600a2a3098e, postZeros = false, forceDelete = false), log id: 6e4e2d58
2014-09-22 10:56:13,005 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (ajp-/127.0.0.1:8702-2) [3fd8ea7e] FINISH, DeleteImageGroupVDSCommand, log id: 6e4e2d58
2014-09-22 10:56:13,073 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (ajp-/127.0.0.1:8702-2) [3fd8ea7e] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command d8ba90fc-d212-416a-8472-d52f810d06f3
2014-09-22 10:56:13,073 INFO  [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (ajp-/127.0.0.1:8702-2) [3fd8ea7e] CommandMultiAsyncTasks::AttachTask: Attaching task 8504f388-bdc4-481a-877c-29ddb9b77cdf to command d8ba90fc-d212-416a-8472-d52f810d06f3.
2014-09-22 10:56:13,092 INFO  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (ajp-/127.0.0.1:8702-2) [3fd8ea7e] Adding task 8504f388-bdc4-481a-877c-29ddb9b77cdf (Parent Command RemoveMemoryVolumes, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't started yet..

However, the SPM's log (vdsb-vdsm.log) only starts at 11:00, making it impossible to draw conclusions.
Can you please attach the log containing this timestamp?

Comment 4 Ori Gofen 2014-09-22 12:21:29 UTC
I have updated the header: RAM and configuration images are not deleted from a file domain, in any case.

Comment 5 Ori Gofen 2014-09-22 13:03:22 UTC
Created attachment 939988 [details]
vdsm+engine logs

Reproduced with new logs, after cleaning the engine.

steps:
1. Add a host (15:43:45)
2. Create a new NFS SD (15:47:43)
3. Add a VM + disks (15:53)
4. Run the VM and create a live snapshot
Tree view after the live snapshot:
# tree
.
├── 00000002-0002-0002-0002-000000000150
│   ├── 643916ac-8c04-4e5e-86b8-bd66c566de82 -> /rhev/data-center/mnt/10.35.160.108:_RHEV_ogofen_1/643916ac-8c04-4e5e-86b8-bd66c566de82
│   └── mastersd -> /rhev/data-center/mnt/10.35.160.108:_RHEV_ogofen_1/643916ac-8c04-4e5e-86b8-bd66c566de82
└── mnt
    └── 10.35.160.108:_RHEV_ogofen_1
        ├── 643916ac-8c04-4e5e-86b8-bd66c566de82
        │   ├── dom_md
        │   │   ├── ids
        │   │   ├── inbox
        │   │   ├── leases
        │   │   ├── metadata
        │   │   └── outbox
        │   ├── images
        │   │   ├── 77f360fd-e896-4179-98f6-4fc7cd3a58b3
        │   │   │   ├── 45410f92-03aa-46f2-a34d-14ddad695586
        │   │   │   ├── 45410f92-03aa-46f2-a34d-14ddad695586.lease
        │   │   │   └── 45410f92-03aa-46f2-a34d-14ddad695586.meta
        │   │   ├── 96c6065c-32f1-41c0-a12b-6390e22fc557
        │   │   │   ├── 2eea7593-8e41-472e-8e4a-a4b523a2984e
        │   │   │   ├── 2eea7593-8e41-472e-8e4a-a4b523a2984e.lease
        │   │   │   ├── 2eea7593-8e41-472e-8e4a-a4b523a2984e.meta
        │   │   │   ├── c92878a6-cfe8-415e-bca0-bdc64f04f0ae
        │   │   │   ├── c92878a6-cfe8-415e-bca0-bdc64f04f0ae.lease
        │   │   │   └── c92878a6-cfe8-415e-bca0-bdc64f04f0ae.meta
        │   │   └── f45c6164-7298-4c43-88dc-7d8fb6e9ad33
        │   │       ├── 16d45b80-1b3d-4a6a-85ca-2e9d14b7bee9
        │   │       ├── 16d45b80-1b3d-4a6a-85ca-2e9d14b7bee9.lease
        │   │       └── 16d45b80-1b3d-4a6a-85ca-2e9d14b7bee9.meta
        │   └── master
        │       ├── tasks
        │       └── vms
        └── __DIRECT_IO_TEST__
5. Power off the VM and remove it, including its disks
Tree view after the removal:
 # tree
.
├── 00000002-0002-0002-0002-000000000150
│   ├── 643916ac-8c04-4e5e-86b8-bd66c566de82 -> /rhev/data-center/mnt/10.35.160.108:_RHEV_ogofen_1/643916ac-8c04-4e5e-86b8-bd66c566de82
│   └── mastersd -> /rhev/data-center/mnt/10.35.160.108:_RHEV_ogofen_1/643916ac-8c04-4e5e-86b8-bd66c566de82
└── mnt
    └── 10.35.160.108:_RHEV_ogofen_1
        ├── 643916ac-8c04-4e5e-86b8-bd66c566de82
        │   ├── dom_md
        │   │   ├── ids
        │   │   ├── inbox
        │   │   ├── leases
        │   │   ├── metadata
        │   │   └── outbox
        │   ├── images
        │   │   ├── 77f360fd-e896-4179-98f6-4fc7cd3a58b3
        │   │   │   ├── 45410f92-03aa-46f2-a34d-14ddad695586
        │   │   │   ├── 45410f92-03aa-46f2-a34d-14ddad695586.lease
        │   │   │   └── 45410f92-03aa-46f2-a34d-14ddad695586.meta
        │   │   └── f45c6164-7298-4c43-88dc-7d8fb6e9ad33
        │   │       ├── 16d45b80-1b3d-4a6a-85ca-2e9d14b7bee9
        │   │       ├── 16d45b80-1b3d-4a6a-85ca-2e9d14b7bee9.lease
        │   │       └── 16d45b80-1b3d-4a6a-85ca-2e9d14b7bee9.meta
        │   └── master
        │       ├── tasks
        │       └── vms
        └── __DIRECT_IO_TEST__

Those two leftover images are the VM's RAM and configuration images.

Comment 6 Allon Mureinik 2014-09-22 13:57:24 UTC
Seems like no remove image command is issued.
Omer - storage or virt?

Comment 7 Allon Mureinik 2014-09-22 16:59:16 UTC
*** Bug 1145188 has been marked as a duplicate of this bug. ***

Comment 8 Allon Mureinik 2014-09-22 16:59:58 UTC
*** Bug 1145254 has been marked as a duplicate of this bug. ***

Comment 10 Omer Frenkel 2014-09-23 10:05:34 UTC
Arik, can you please take a look?

Comment 11 Eyal Edri 2014-10-07 07:12:37 UTC
This bug's status was moved to MODIFIED before engine vt5 was built,
hence it is being moved to ON_QA. If this was a mistake and the fix isn't in,
please contact rhev-integ.

Comment 12 Ori Gofen 2014-10-22 16:44:29 UTC
verified on vt7

Comment 13 Allon Mureinik 2015-02-16 19:09:05 UTC
RHEV-M 3.5.0 has been released, closing this bug.

