Bug 1147085 - Memory volumes not deleted when removing a vm with snapshots
Summary: Memory volumes not deleted when removing a vm with snapshots
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: oVirt
Classification: Retired
Component: ovirt-engine-core
Version: 3.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
: 3.5.0
Assignee: Idan Shaby
QA Contact: Ori Gofen
URL:
Whiteboard: storage
Depends On:
Blocks: 1073943
 
Reported: 2014-09-26 21:01 UTC by Nir Soffer
Modified: 2016-02-10 19:43 UTC (History)
8 users

Fixed In Version: ovirt-3.5.0_rc4
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-10-17 12:38:25 UTC
oVirt Team: Storage


Attachments (Terms of Use)
engine log (754.59 KB, text/plain)
2014-09-26 21:01 UTC, Nir Soffer
vdsm log (236.76 KB, application/x-xz)
2014-09-26 21:02 UTC, Nir Soffer


Links
System ID Priority Status Summary Last Updated
oVirt gerrit 33429 master MERGED core: Remove Memory and OVF Volumes on VM Remove Never
oVirt gerrit 33432 ovirt-engine-3.5 MERGED core: Remove Memory and OVF Volumes on VM Remove Never
oVirt gerrit 33464 ovirt-engine-3.4 ABANDONED core: Remove Memory and OVF Volumes on VM Remove Never

Description Nir Soffer 2014-09-26 21:01:27 UTC
Created attachment 941720 [details]
engine log

Description of problem:

When creating a snapshot that includes a memory snapshot, a new volume is
created for the memory contents. When removing a vm, the memory
snapshot volumes are not deleted.

Version-Release number of selected component (if applicable):
oVirt Engine Version: 3.5.0-0.0.master.20140911091402.gite1c5ffd.fc20
vdsm master 29defc3faab3d7

How reproducible:
Always

Steps to Reproduce:
1. Create vm with 1G memory and one 1G thin provisioned disk
2. Run vm
3. Create snapshot (ensure that "save memory" is checked)
4. Stop vm
5. Remove vm

Actual results:
Memory volumes are not removed, and the free space of the domain holding them
is smaller than expected (10G instead of 15G).

The only way to remove the volumes is to log in to one of the hosts and remove
the unwanted lvs, but there is no easy way to detect which volumes are
indeed unused.
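A check like the one described above can be sketched in a few lines of Python. This is not an oVirt tool; the helper name and the idea of passing in the set of image UUIDs known to the engine (which would have to be queried separately, e.g. from the engine database) are assumptions for illustration only.

```python
# Hedged sketch: flag LVs in a storage domain VG that are neither the
# domain's special volumes nor volumes known to the engine. The set of
# known UUIDs is an assumed input here.

SPECIAL_LVS = {"ids", "inbox", "leases", "master", "metadata", "outbox"}

def find_leftover_lvs(lvs_output, known_uuids):
    """Return LV names from `lvs` output that look orphaned."""
    leftovers = []
    for line in lvs_output.splitlines():
        fields = line.split()
        # Skip blank lines and the "LV VG Attr LSize" header line.
        if not fields or fields[0] == "LV":
            continue
        name = fields[0]
        if name not in SPECIAL_LVS and name not in known_uuids:
            leftovers.append(name)
    return leftovers
```

Feeding it the `lvs` listings below, with an empty known-UUID set, would flag exactly the UUID-named leftover volumes.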

Expected results:
Memory volumes should be removed when vm is removed

Additional info:

To test this issue, I created two new iscsi storage domains and repeated
the steps above twice.

This is the master domain contents at the end of the test.

# lvs ff559f46-c495-4f6b-901c-2a624042a050
  LV                                   VG                                   Attr       LSize
  3149a0f8-82d3-43a4-b7e6-d6033485afb0 ff559f46-c495-4f6b-901c-2a624042a050 -wi------- 128.00m
  75b7d7b4-5c1a-48fe-a079-0f6529bd8968 ff559f46-c495-4f6b-901c-2a624042a050 -wi------- 128.00m
  ids                                  ff559f46-c495-4f6b-901c-2a624042a050 -wi-ao---- 128.00m
  inbox                                ff559f46-c495-4f6b-901c-2a624042a050 -wi-a----- 128.00m
  leases                               ff559f46-c495-4f6b-901c-2a624042a050 -wi-a-----   2.00g
  master                               ff559f46-c495-4f6b-901c-2a624042a050 -wi-ao----   1.00g
  metadata                             ff559f46-c495-4f6b-901c-2a624042a050 -wi-a----- 512.00m
  outbox                               ff559f46-c495-4f6b-901c-2a624042a050 -wi-a----- 128.00m

The two 128M volumes are OVF store volumes.

This is the other domain contents at the end of the test:

# lvs 3e419414-ee73-47e8-809d-60de4a88403c
  LV                                   VG                                   Attr       LSize
  0ffd2268-ef48-4fbf-aaf6-73f4249d5f40 3e419414-ee73-47e8-809d-60de4a88403c -wi-------   1.38g
  58b9b9f8-b2c7-441f-acdb-2c0616dff271 3e419414-ee73-47e8-809d-60de4a88403c -wi-------   1.38g
  70aafeeb-6307-4451-a558-d707310667ab 3e419414-ee73-47e8-809d-60de4a88403c -wi-------   1.00g
  e89a3dc7-947c-4692-b5d1-eb8f5a4958cb 3e419414-ee73-47e8-809d-60de4a88403c -wi-------   1.00g
  ids                                  3e419414-ee73-47e8-809d-60de4a88403c -wi-ao---- 128.00m
  inbox                                3e419414-ee73-47e8-809d-60de4a88403c -wi-a----- 128.00m
  leases                               3e419414-ee73-47e8-809d-60de4a88403c -wi-a-----   2.00g
  master                               3e419414-ee73-47e8-809d-60de4a88403c -wi-a-----   1.00g
  metadata                             3e419414-ee73-47e8-809d-60de4a88403c -wi-a----- 512.00m
  outbox                               3e419414-ee73-47e8-809d-60de4a88403c -wi-a----- 128.00m
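As a rough sanity check on the missing free space, the leftover UUID-named LVs in a listing like the one above can be totalled. This is a hedged sketch (assumed helper, not part of vdsm); it parses lvm's default human-readable size suffixes:

```python
# Hedged sketch: sum the LSize of all non-special LVs in `lvs` output,
# normalized to GiB. Only the "m" and "g" suffixes seen above are handled.

SPECIAL_LVS = {"ids", "inbox", "leases", "master", "metadata", "outbox"}
UNIT_TO_GIB = {"m": 1.0 / 1024, "g": 1.0}

def leaked_gib(lvs_output):
    """Total size in GiB of all LVs that are not special domain volumes."""
    total = 0.0
    for line in lvs_output.splitlines():
        fields = line.split()
        if len(fields) < 4 or fields[0] in SPECIAL_LVS:
            continue
        size = fields[3]
        unit = size[-1].lower()
        if unit not in UNIT_TO_GIB:  # also skips the header line
            continue
        total += float(size[:-1]) * UNIT_TO_GIB[unit]
    return total
```

For the listing above, the four leftover volumes total about 4.76 GiB on this domain alone.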

Note that the vm was created on the master domain
(ff559f46-c495-4f6b-901c-2a624042a050), but the memory volumes were created
on the other domain (3e419414-ee73-47e8-809d-60de4a88403c). This does not
make sense and is probably the root cause of the issue.

Looking in vdsm log, we can see that only images in the master domain
(ff559f46-c495-4f6b-901c-2a624042a050) were deleted. There is no other
deleteImage or deleteVolume request in vdsm log.

# grep 'Run and protect: deleteImage' vdsm.log 
Thread-34::INFO::2014-09-26 22:57:25,005::logUtils::48::dispatcher::(wrapper) Run and protect: deleteImage(sdUUID='ff559f46-c495-4f6b-901c-2a624042a050', spUUID='b86b687a-d073-497a-ac8a-249025419a3e', imgUUID='a256fc11-f9ab-4391-af78-2a96e27fe39b', postZero='false', force='false')
Thread-34::INFO::2014-09-26 22:57:25,462::logUtils::51::dispatcher::(wrapper) Run and protect: deleteImage, Return response: None
Thread-34::INFO::2014-09-26 23:00:45,883::logUtils::48::dispatcher::(wrapper) Run and protect: deleteImage(sdUUID='ff559f46-c495-4f6b-901c-2a624042a050', spUUID='b86b687a-d073-497a-ac8a-249025419a3e', imgUUID='9288c7e8-43fd-4174-9889-634fb2e0437a', postZero='false', force='false')
Thread-34::INFO::2014-09-26 23:00:46,338::logUtils::51::dispatcher::(wrapper) Run and protect: deleteImage, Return response: None
Thread-34::INFO::2014-09-26 23:07:24,205::logUtils::48::dispatcher::(wrapper) Run and protect: deleteImage(sdUUID='ff559f46-c495-4f6b-901c-2a624042a050', spUUID='b86b687a-d073-497a-ac8a-249025419a3e', imgUUID='60e28edc-4bb3-4f87-b6a2-5dc45f1a0971', postZero='false', force='false')
Thread-34::INFO::2014-09-26 23:07:24,659::logUtils::51::dispatcher::(wrapper) Run and protect: deleteImage, Return response: None
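The grep above can also be done programmatically, to check which storage domains actually received delete requests. A hedged sketch; the regex targets only the log-line format quoted above, nothing beyond that is assumed:

```python
import re

# Matches the "Run and protect: deleteImage(...)" lines quoted above and
# captures the storage domain and image UUIDs.
DELETE_RE = re.compile(
    r"Run and protect: deleteImage\(sdUUID='(?P<sd>[0-9a-f-]+)', "
    r"spUUID='[0-9a-f-]+', imgUUID='(?P<img>[0-9a-f-]+)'"
)

def deleted_images(log_text):
    """Return (sdUUID, imgUUID) for every deleteImage request in the log."""
    return [(m.group("sd"), m.group("img"))
            for m in DELETE_RE.finditer(log_text)]
```

Running it over this vdsm log would show every sdUUID equal to the master domain, confirming that no delete request ever reached the domain holding the memory volumes.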

Since vdsm was not asked to delete the images, this is clearly an engine bug.

I did not test this with an older engine version, but I'm sure this is a
regression. I create and delete vms regularly while verifying bugs, and I
would have noticed if my storage domains were losing free space.

Comment 1 Nir Soffer 2014-09-26 21:02:40 UTC
Created attachment 941721 [details]
vdsm log

Comment 2 Allon Mureinik 2014-09-28 08:22:27 UTC
Idan, I think this is another instance of a bug you're already working on. Please verify (and solve :-))

Comment 3 Sandro Bonazzola 2014-10-17 12:38:25 UTC
oVirt 3.5 has been released and should include the fix for this issue.

