Bug 987994 - Live snapshot leaves disks with no name on storage domains
Summary: Live snapshot leaves disks with no name on storage domains
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.3.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.3.0
Assignee: Sergey Gotliv
QA Contact: yeylon@redhat.com
URL:
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2013-07-24 14:05 UTC by Jakub Libosvar
Modified: 2016-04-18 06:59 UTC
CC List: 12 users

Fixed In Version: is18
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
oVirt Team: Storage
Target Upstream Version:
Embargoed:
scohen: Triaged+


Attachments
db dump, engine, vdsm logs (679.34 KB, application/gzip)
2013-07-24 14:05 UTC, Jakub Libosvar


Links
oVirt gerrit 18647 (Private: 0, Priority: None, Status: None, Summary: None, Last Updated: Never)

Description Jakub Libosvar 2013-07-24 14:05:16 UTC
Created attachment 777803 [details]
db dump, engine, vdsm logs

Description of problem:
Creating a live snapshot of a VM with two disks and then removing the VM leaves the disks in the system. These disks are not visible in the main "Disks" collection, only on the storage domain (i.e. the Disks tab is empty, but the Disks subtab of the storage domain shows disks with no name). These disks cannot be removed, and the whole data center has to be forcibly removed to get rid of them.

Logical volumes of snapshots are left on storage as well:
# lvs
  59158d38-028e-4a40-9e40-0198021b1cae 5cf50403-521d-43d8-9318-6229013a1b60 -wi------   1.00g                                             
  6c0b1839-ad3e-4eb6-9316-a11d21b3db5a 5cf50403-521d-43d8-9318-6229013a1b60 -wi------   1.00g                                             
  7380017e-8202-48fc-ae8f-3f1f6c52eebb 5cf50403-521d-43d8-9318-6229013a1b60 -wi------   1.00g                                             
  e313df65-5e8a-452f-b1c4-89363e611917 5cf50403-521d-43d8-9318-6229013a1b60 -wi------   1.00g   

But they have been cleaned up from device-mapper:
# lsblk 
NAME                                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                                             8:0    0 232.9G  0 disk  
├─sda1                                                          8:1    0   250M  0 part  /boot
└─sda2                                                          8:2    0 232.7G  0 part  
  ├─vg_sf02-lv_root (dm-0)                                    253:0    0  69.5G  0 lvm   /
  ├─vg_sf02-lv_swap (dm-1)                                    253:1    0     1G  0 lvm   [SWAP]
  └─vg_sf02-lv_home (dm-2)                                    253:2    0   200M  0 lvm   /home
sdc                                                             8:32   0    21G  0 disk  
└─36006048c0714062ba4b91f6a904bc07f (dm-4)                    253:4    0    21G  0 mpath 
sdd                                                             8:48   0    21G  0 disk  
└─36006048cc786b195adb3de2b6d613cbc (dm-8)                    253:8    0    21G  0 mpath 
sdb                                                             8:16   0   200G  0 disk  
└─36006048c78acaa6eac8ebc05bf73dee1 (dm-5)                    253:5    0   200G  0 mpath 
  ├─1b573d9d--6af4--4135--b8ec--6880e9d4d359-metadata (dm-15) 253:15   0   512M  0 lvm   
  ├─1b573d9d--6af4--4135--b8ec--6880e9d4d359-leases (dm-16)   253:16   0     2G  0 lvm   
  ├─1b573d9d--6af4--4135--b8ec--6880e9d4d359-ids (dm-17)      253:17   0   128M  0 lvm   
  ├─1b573d9d--6af4--4135--b8ec--6880e9d4d359-inbox (dm-18)    253:18   0   128M  0 lvm   
  ├─1b573d9d--6af4--4135--b8ec--6880e9d4d359-outbox (dm-19)   253:19   0   128M  0 lvm   
  └─1b573d9d--6af4--4135--b8ec--6880e9d4d359-master (dm-20)   253:20   0     1G  0 lvm   
sde                                                             8:64   0    21G  0 disk  
└─36006048c12952607a6254682d950135a (dm-6)                    253:6    0    21G  0 mpath 
sdf                                                             8:80   0   250G  0 disk  
└─36006048c14e9eb8f668dfc53ea5995ca (dm-7)                    253:7    0   250G  0 mpath 
  ├─5cf50403--521d--43d8--9318--6229013a1b60-metadata (dm-9)  253:9    0   512M  0 lvm   
  ├─5cf50403--521d--43d8--9318--6229013a1b60-leases (dm-10)   253:10   0     2G  0 lvm   
  ├─5cf50403--521d--43d8--9318--6229013a1b60-ids (dm-11)      253:11   0   128M  0 lvm   
  ├─5cf50403--521d--43d8--9318--6229013a1b60-inbox (dm-12)    253:12   0   128M  0 lvm   
  ├─5cf50403--521d--43d8--9318--6229013a1b60-outbox (dm-13)   253:13   0   128M  0 lvm   
  └─5cf50403--521d--43d8--9318--6229013a1b60-master (dm-14)   253:14   0     1G  0 lvm   /rhev/data-center/mnt/blockSD/5cf50403-521d-43d8-9318-622901
sdg                                                             8:96   0    21G  0 disk  
└─36006048ce7ff8320bd02378f18cf9712 (dm-3)                    253:3    0    21G  0 mpath 
# dmsetup ls
5cf50403--521d--43d8--9318--6229013a1b60-leases	(253:10)
36006048c12952607a6254682d950135a	(253:6)
36006048ce7ff8320bd02378f18cf9712	(253:3)
36006048c0714062ba4b91f6a904bc07f	(253:4)
36006048c78acaa6eac8ebc05bf73dee1	(253:5)
1b573d9d--6af4--4135--b8ec--6880e9d4d359-master	(253:20)
5cf50403--521d--43d8--9318--6229013a1b60-inbox	(253:12)
1b573d9d--6af4--4135--b8ec--6880e9d4d359-outbox	(253:19)
1b573d9d--6af4--4135--b8ec--6880e9d4d359-metadata	(253:15)
5cf50403--521d--43d8--9318--6229013a1b60-ids	(253:11)
5cf50403--521d--43d8--9318--6229013a1b60-metadata	(253:9)
5cf50403--521d--43d8--9318--6229013a1b60-master	(253:14)
5cf50403--521d--43d8--9318--6229013a1b60-outbox	(253:13)
36006048cc786b195adb3de2b6d613cbc	(253:8)
vg_sf02-lv_home	(253:2)
36006048c14e9eb8f668dfc53ea5995ca	(253:7)
1b573d9d--6af4--4135--b8ec--6880e9d4d359-ids	(253:17)
1b573d9d--6af4--4135--b8ec--6880e9d4d359-leases	(253:16)
vg_sf02-lv_swap	(253:1)
1b573d9d--6af4--4135--b8ec--6880e9d4d359-inbox	(253:18)
vg_sf02-lv_root	(253:0)
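
For reference, a minimal shell sketch for spotting leftover volumes like the ones in the lvs output above. The VG name is the storage domain UUID; the IU_<imageUUID>/PU_<parentUUID> tag convention is an assumption about how VDSM tags its volumes and should be verified on the affected version:

# Show the storage domain's LVs together with their VDSM tags.
lvs -o lv_name,lv_size,lv_tags 5cf50403-521d-43d8-9318-6229013a1b60

# Extract the image UUIDs referenced by the leftover volumes; any UUID that
# no longer shows up as a disk in the engine is a candidate orphan.
lvs --noheadings -o lv_tags 5cf50403-521d-43d8-9318-6229013a1b60 \
  | grep -o 'IU_[0-9a-f-]*' | sort -u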


Version-Release number of selected component (if applicable):
rhevm-3.3.0-0.9.master.el6ev.noarch
vdsm-4.12.0-rc1.12.git8ee6885.el6.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Have VM with two disks
2. Create live snapshot
3. Remove VM

Actual results:
Disks are left on the storage domain.

Expected results:
Disks are removed from the system.

Additional info:
Worked in 3.2.
The live snapshot sanity test fails on 3.3.

Comment 3 Maor 2013-07-28 01:43:41 UTC
Are you sure this is always reproduced?

IMO, the fix for https://bugzilla.redhat.com/show_bug.cgi?id=978975 might solve this issue on remove failure.

Comment 5 Maor 2013-07-28 22:05:50 UTC
I tried to reproduce the scenario with a live snapshot, but it did not reproduce.
I think the issue here is not the live snapshot but the commit of a previewed snapshot.

The reproduction steps I found from the logs are:
1. Create an image disk in a VM
2. Create a snapshot
3. Create a new image disk
4. Preview the snapshot that was created
5. Commit the snapshot

The last disk that was created was removed from the DB, but it was not completely removed from VDSM.
(The code that should be checked here is the _imagesToDelete list, which is initialized in RestoreFromSnapshotCommand and passed as an argument to DestroyImageVdsCommand; the image destroy should probably be called here.)

This bug is not new behaviour, since it appears there have always been volume leftovers in the storage, but it was not an issue until we introduced the new Disks subtab; we probably also use the same query to find whether there are orphaned images when deactivating a storage domain.

Regarding the disks which could not be removed, I think the problem was that since those disks were already removed from the DB, there are no permissions related to them, and therefore we cannot remove them from the storage.
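
For reference, a rough host-side sketch of how to confirm which images still exist on the domain even though they are gone from the engine DB. The vdsClient verb names are written from memory for this VDSM version and should be checked against "vdsClient -s 0 --help"; SD_UUID, SP_UUID and IMG_UUID are placeholders:

# Images VDSM still sees on the storage domain:
vdsClient -s 0 getImagesList SD_UUID

# Volumes belonging to a suspected orphaned image:
vdsClient -s 0 getVolumesList SD_UUID SP_UUID IMG_UUID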

Comment 8 Jakub Libosvar 2013-08-01 12:15:25 UTC
For the record: Maor was right. The correct reproducer is:
1. Create an image disk in a VM
2. Create a snapshot
3. Preview the created snapshot
4. Commit the snapshot
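
For anyone re-running the steps above, a rough sketch of step 2 over the REST API (host name, VM UUID and credentials are placeholders); the preview and commit steps can then be driven from the VM's Snapshots subtab in the Administration Portal:

# Create the snapshot via the REST API.
curl -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' \
  -d '<snapshot><description>bz987994-repro</description></snapshot>' \
  https://RHEVM-FQDN/api/vms/VM_UUID/snapshots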

Comment 9 Sergey Gotliv 2013-08-25 20:26:25 UTC
I am removing the Regression keyword based on Maor's comment 5 and on comment 8, where Kuba agrees that Maor's steps to reproduce are correct.

Comment 10 vvyazmin@redhat.com 2013-10-13 15:21:29 UTC
Tested on FCP data centers.
Verified on a RHEVM 3.3 - IS18 environment:

Host OS: RHEL 6.5

RHEVM:  rhevm-3.3.0-0.25.beta1.el6ev.noarch
PythonSDK:  rhevm-sdk-python-3.3.0.15-1.el6ev.noarch
VDSM:  vdsm-4.13.0-0.2.beta1.el6ev.x86_64
LIBVIRT:  libvirt-0.10.2-27.el6.x86_64
QEMU & KVM:  qemu-kvm-rhev-0.12.1.2-2.412.el6.x86_64
SANLOCK:  sanlock-2.8-1.el6.x86_64

Comment 13 Itamar Heim 2014-01-21 22:33:04 UTC
Closing - RHEV 3.3 Released


