Bug 987994 - Live snapshot leaves disks with no name on storage domains
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Hardware: x86_64 Linux
Priority: unspecified
Severity: high
Target Release: 3.3.0
Assigned To: Sergey Gotliv
Reported: 2013-07-24 10:05 EDT by Jakub Libosvar
Modified: 2016-04-18 02:59 EDT (History)
12 users

Fixed In Version: is18
Doc Type: Bug Fix
Type: Bug
oVirt Team: Storage
scohen: Triaged+

Attachments
db dump, engine, vdsm logs (679.34 KB, application/gzip)
2013-07-24 10:05 EDT, Jakub Libosvar

External Trackers
Tracker ID | Priority | Status | Summary | Last Updated
oVirt gerrit 18647 | None | None | None | Never

Description Jakub Libosvar 2013-07-24 10:05:16 EDT
Created attachment 777803
db dump, engine, vdsm logs

Description of problem:
Creating a live snapshot of a VM with two disks and then removing the VM leaves the disks in the system. These disks are not visible in the "Disks" collection, only on the storage domain (i.e. the main Disks tab is empty, but the Disks tab under the Storage domain tab shows disks with no name). These disks cannot be removed, and the whole data center must be forcibly removed.

The snapshots' logical volumes are left on the storage as well:
  59158d38-028e-4a40-9e40-0198021b1cae 5cf50403-521d-43d8-9318-6229013a1b60 -wi------   1.00g                                             
  6c0b1839-ad3e-4eb6-9316-a11d21b3db5a 5cf50403-521d-43d8-9318-6229013a1b60 -wi------   1.00g                                             
  7380017e-8202-48fc-ae8f-3f1f6c52eebb 5cf50403-521d-43d8-9318-6229013a1b60 -wi------   1.00g                                             
  e313df65-5e8a-452f-b1c4-89363e611917 5cf50403-521d-43d8-9318-6229013a1b60 -wi------   1.00g   

But they have been cleaned up from device-mapper:
# lsblk 
NAME                                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                                             8:0    0 232.9G  0 disk  
├─sda1                                                          8:1    0   250M  0 part  /boot
└─sda2                                                          8:2    0 232.7G  0 part  
  ├─vg_sf02-lv_root (dm-0)                                    253:0    0  69.5G  0 lvm   /
  ├─vg_sf02-lv_swap (dm-1)                                    253:1    0     1G  0 lvm   [SWAP]
  └─vg_sf02-lv_home (dm-2)                                    253:2    0   200M  0 lvm   /home
sdc                                                             8:32   0    21G  0 disk  
└─36006048c0714062ba4b91f6a904bc07f (dm-4)                    253:4    0    21G  0 mpath 
sdd                                                             8:48   0    21G  0 disk  
└─36006048cc786b195adb3de2b6d613cbc (dm-8)                    253:8    0    21G  0 mpath 
sdb                                                             8:16   0   200G  0 disk  
└─36006048c78acaa6eac8ebc05bf73dee1 (dm-5)                    253:5    0   200G  0 mpath 
  ├─1b573d9d--6af4--4135--b8ec--6880e9d4d359-metadata (dm-15) 253:15   0   512M  0 lvm   
  ├─1b573d9d--6af4--4135--b8ec--6880e9d4d359-leases (dm-16)   253:16   0     2G  0 lvm   
  ├─1b573d9d--6af4--4135--b8ec--6880e9d4d359-ids (dm-17)      253:17   0   128M  0 lvm   
  ├─1b573d9d--6af4--4135--b8ec--6880e9d4d359-inbox (dm-18)    253:18   0   128M  0 lvm   
  ├─1b573d9d--6af4--4135--b8ec--6880e9d4d359-outbox (dm-19)   253:19   0   128M  0 lvm   
  └─1b573d9d--6af4--4135--b8ec--6880e9d4d359-master (dm-20)   253:20   0     1G  0 lvm   
sde                                                             8:64   0    21G  0 disk  
└─36006048c12952607a6254682d950135a (dm-6)                    253:6    0    21G  0 mpath 
sdf                                                             8:80   0   250G  0 disk  
└─36006048c14e9eb8f668dfc53ea5995ca (dm-7)                    253:7    0   250G  0 mpath 
  ├─5cf50403--521d--43d8--9318--6229013a1b60-metadata (dm-9)  253:9    0   512M  0 lvm   
  ├─5cf50403--521d--43d8--9318--6229013a1b60-leases (dm-10)   253:10   0     2G  0 lvm   
  ├─5cf50403--521d--43d8--9318--6229013a1b60-ids (dm-11)      253:11   0   128M  0 lvm   
  ├─5cf50403--521d--43d8--9318--6229013a1b60-inbox (dm-12)    253:12   0   128M  0 lvm   
  ├─5cf50403--521d--43d8--9318--6229013a1b60-outbox (dm-13)   253:13   0   128M  0 lvm   
  └─5cf50403--521d--43d8--9318--6229013a1b60-master (dm-14)   253:14   0     1G  0 lvm   /rhev/data-center/mnt/blockSD/5cf50403-521d-43d8-9318-622901
sdg                                                             8:96   0    21G  0 disk  
└─36006048ce7ff8320bd02378f18cf9712 (dm-3)                    253:3    0    21G  0 mpath 
# dmsetup ls
5cf50403--521d--43d8--9318--6229013a1b60-leases	(253:10)
36006048c12952607a6254682d950135a	(253:6)
36006048ce7ff8320bd02378f18cf9712	(253:3)
36006048c0714062ba4b91f6a904bc07f	(253:4)
36006048c78acaa6eac8ebc05bf73dee1	(253:5)
1b573d9d--6af4--4135--b8ec--6880e9d4d359-master	(253:20)
5cf50403--521d--43d8--9318--6229013a1b60-inbox	(253:12)
1b573d9d--6af4--4135--b8ec--6880e9d4d359-outbox	(253:19)
1b573d9d--6af4--4135--b8ec--6880e9d4d359-metadata	(253:15)
5cf50403--521d--43d8--9318--6229013a1b60-ids	(253:11)
5cf50403--521d--43d8--9318--6229013a1b60-metadata	(253:9)
5cf50403--521d--43d8--9318--6229013a1b60-master	(253:14)
5cf50403--521d--43d8--9318--6229013a1b60-outbox	(253:13)
36006048cc786b195adb3de2b6d613cbc	(253:8)
vg_sf02-lv_home	(253:2)
36006048c14e9eb8f668dfc53ea5995ca	(253:7)
1b573d9d--6af4--4135--b8ec--6880e9d4d359-ids	(253:17)
1b573d9d--6af4--4135--b8ec--6880e9d4d359-leases	(253:16)
vg_sf02-lv_swap	(253:1)
1b573d9d--6af4--4135--b8ec--6880e9d4d359-inbox	(253:18)
vg_sf02-lv_root	(253:0)
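The leftover volumes can be spotted from the `lvs` attribute string alone: the fifth attribute character is `a` for an active LV and `-` for an inactive one, which is exactly the state of the leftover snapshot volumes above (`-wi------`). A minimal sketch, with sample rows from this bug inlined so it runs anywhere; in a real setup the input would come from `lvs --noheadings -o lv_name,vg_name,lv_attr`:

```shell
# lvs_sample stands in for real `lvs` output (lv_name, vg_name, lv_attr);
# the rows are sample data taken from this bug report.
lvs_sample() {
  cat <<'EOF'
59158d38-028e-4a40-9e40-0198021b1cae 5cf50403-521d-43d8-9318-6229013a1b60 -wi------
6c0b1839-ad3e-4eb6-9316-a11d21b3db5a 5cf50403-521d-43d8-9318-6229013a1b60 -wi------
metadata 5cf50403-521d-43d8-9318-6229013a1b60 -wi-ao----
EOF
}

# Print LVs whose 5th lv_attr character is not 'a', i.e. inactive LVs
# like the leftover snapshot volumes.
lvs_sample | awk 'substr($3,5,1) != "a" { print $1 }'
```

This matches the observation above: the snapshot LVs exist in the VG but have no active device-mapper entry.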

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Have a VM with two disks
2. Create live snapshot
3. Remove VM

Actual results:
Disks are left on the storage domain

Expected results:
Disks are removed from the system

Additional info:
This worked in 3.2.
The live snapshot sanity test is failing on 3.3.
Comment 3 Maor 2013-07-27 21:43:41 EDT
Are you sure this is always reproducible?

In my opinion, https://bugzilla.redhat.com/show_bug.cgi?id=978975 might solve this issue on remove failure.
Comment 5 Maor 2013-07-28 18:05:50 EDT
I tried to reproduce the scenario with a live snapshot, but it did not reproduce.
I think the issue here is not the live snapshot but the commit of a previewed snapshot.

The reproduction steps I found in the logs are:
1. Create an image disk in a VM
2. Create a snapshot
3. Create a new image disk
4. Preview the created snapshot
5. Commit the snapshot

The last disk that was created was removed from the DB, but it was not completely removed from VDSM.
(The code that should be checked here is the _imagesToDelete list, which is initialized in RestoreFromSnapshotCommand and passed as an argument to DestroyImageVdsCommand; destroy image should probably be called here.)

This bug is not new behaviour: it appears there were always volume leftovers on the storage, but they were not an issue until we introduced the new disks subtab, which probably uses the same query we use to find orphaned images when deactivating a storage domain.

Regarding the disks that could not be removed, I think the problem is that since those disks were already removed from the DB, there are no permissions related to them, and therefore we cannot remove them from the storage.
Comment 8 Jakub Libosvar 2013-08-01 08:15:25 EDT
For the record: Maor was right. The correct reproducer is:
1. Create an image disk in a VM
2. Create a snapshot
3. Preview the created snapshot
4. Commit the snapshot
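After running the reproducer, the leftovers can be confirmed by comparing the image UUIDs the engine still knows about with the LVs actually present in the storage domain VG. The sketch below uses hypothetical helper functions populated with sample UUIDs from this bug so it is runnable anywhere; in a real setup `engine_images` would query the engine DB and `domain_lvs` would run `lvs` against the domain VG:

```shell
# Hypothetical helpers with sample data from this bug report.
engine_images() {   # e.g. a query over the engine's images table
  printf '%s\n' \
    59158d38-028e-4a40-9e40-0198021b1cae \
    6c0b1839-ad3e-4eb6-9316-a11d21b3db5a
}
domain_lvs() {      # e.g. `lvs --noheadings -o lv_name <domain VG>`
  printf '%s\n' \
    59158d38-028e-4a40-9e40-0198021b1cae \
    6c0b1839-ad3e-4eb6-9316-a11d21b3db5a \
    7380017e-8202-48fc-ae8f-3f1f6c52eebb \
    e313df65-5e8a-452f-b1c4-89363e611917
}

# Any LV on the domain with no matching engine image is a leftover.
for lv in $(domain_lvs); do
  engine_images | grep -qxF "$lv" || echo "leftover: $lv"
done
```

With the sample data this flags the two volumes that the engine no longer tracks, matching the orphaned, nameless disks described in this bug.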
Comment 9 Sergey Gotliv 2013-08-25 16:26:25 EDT
I am removing the Regression keyword based on Maor's comment 5 and on comment 8, where Kuba agrees that Maor's steps to reproduce are correct.
Comment 10 vvyazmin@redhat.com 2013-10-13 11:21:29 EDT
Tested on FCP Data Centers
Verified, tested on RHEVM 3.3 - IS18 environment:

Host OS: RHEL 6.5

RHEVM:  rhevm-3.3.0-0.25.beta1.el6ev.noarch
PythonSDK:  rhevm-sdk-python-
VDSM:  vdsm-4.13.0-0.2.beta1.el6ev.x86_64
LIBVIRT:  libvirt-0.10.2-27.el6.x86_64
QEMU & KVM:  qemu-kvm-rhev-
SANLOCK:  sanlock-2.8-1.el6.x86_64
Comment 13 Itamar Heim 2014-01-21 17:33:04 EST
Closing - RHEV 3.3 Released
