Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 921489

Summary: engine: we allow the user to create a template from/export a VM whose disks are illegal, and after the action is done we change the disk status back to OK.
Product: Red Hat Enterprise Virtualization Manager
Reporter: Dafna Ron <dron>
Component: ovirt-engine
Assignee: Liron Aravot <laravot>
Status: CLOSED CURRENTRELEASE
QA Contact: Dafna Ron <dron>
Severity: high
Docs Contact:
Priority: unspecified
Version: 3.2.0
CC: abaron, acathrow, amureini, dyasny, iheim, lpeer, Rhev-m-bugs, scohen, yeylon, ykaul
Target Milestone: ---
Keywords: Regression
Target Release: 3.2.0
Flags: abaron: Triaged+
Hardware: x86_64
OS: Linux
Whiteboard: storage
Fixed In Version: sf12
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-06-11 09:14:07 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
logs (flags: none)

Description Dafna Ron 2013-03-14 10:21:12 UTC
Created attachment 709968 [details]
logs

Description of problem:

After live storage migration problems, and after trying to remove a VM with wrong mapping, the VM's snapshot status changed to broken and its disks changed to illegal.
I was able to create a template from the VM, and once the creation was done the VM's disk status changed back to OK.

Version-Release number of selected component (if applicable):

sf10

How reproducible:

unknown

Steps to Reproduce:
1. Create 3 iSCSI domains, 100GB each.
2. Create 20 VMs (using a pool) from a template with a 15GB thin-provisioned disk on one domain, and create 2 VMs which are clones of the template on a second domain.
3. Run the VMs and move them all to the third domain.
4. After some of the VMs fail because of storage space, stop the VMs and try to move some of them (including the clones) to a different storage domain.
5. Try to remove the snapshots of the VMs you moved.
6. Try to remove the VMs.
7. After a VM's disk becomes illegal, create a template from it.
  
Actual results:

I was not able to remove the snapshot, and it became broken.
I was unable to remove the VM, and its disk became illegal.
I was able to create a template from the VM, and once I did, the disk status changed to OK.


Expected results:

1. We should not be able to create a template from, or export, a VM whose disks are illegal.
2. If disks are illegal, their status should not change to OK.
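The expected fix amounts to a validation guard run before template creation or export. A minimal sketch in Python for illustration only (the engine itself is Java; the function name and the numeric status codes, taken from oVirt's ImageStatus enum, are assumptions, not the engine's actual API):

```python
# Assumed image status codes (oVirt ImageStatus enum): 1 = OK,
# 2 = LOCKED, 4 = ILLEGAL. All names here are illustrative only.
OK, LOCKED, ILLEGAL = 1, 2, 4

def can_create_template_or_export(disk_statuses):
    """Refuse template creation / VM export unless every disk is OK,
    instead of running the action and resetting the status afterwards."""
    return all(status == OK for status in disk_statuses)

# The clone1 VM in this report has disks with statuses 2 and 4,
# so the action should be blocked:
print(can_create_template_or_export([LOCKED, ILLEGAL]))  # False
```

The key point is that the check happens up front, so an illegal disk never reaches the code path that rewrites its status on completion.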

Additional info:

The VM has a disk and a snapshot.


[root@Dafna-32 ~]# psql -U postgres engine -c "select image_guid,imagestatus,storage_name,vm_names,storage_id,vm_snapshot_id from images_storage_domain_view;" |grep clone1
             image_guid              | imagestatus | storage_name |      vm_names      |              storage_id              |            vm_snapshot_id
--------------------------------------+-------------+--------------+--------------------+--------------------------------------+--------------------------------------
 f5fc7d8a-7949-41dd-a979-5ec21f92cb07 |           2 | Dafna-SF9-01 | clone1             | 52e4917b-10bc-4b03-9b82-a2e15732c15b | 96afd8a1-1ec7-41f9-89e3-517e1f025179
 8233993e-a451-4132-a822-3c1a908ab684 |           4 | Dafna-SF9-01 | clone1             | 52e4917b-10bc-4b03-9b82-a2e15732c15b | 8c559dd4-2d34-409c-a2d3-d41807d7de99
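For readability, the imagestatus values in these queries can be decoded; the mapping below is an assumption based on oVirt's ImageStatus enum, but it is consistent with the transitions shown in this report (illegal before, OK after template creation, illegal again after the failed delete):

```python
# Assumed mapping of images.imagestatus to oVirt's ImageStatus enum
# (illustrative; not verified against this exact build):
IMAGE_STATUS = {0: "Unassigned", 1: "OK", 2: "LOCKED", 4: "ILLEGAL"}

# Decoding the two rows above: the live-storage-migration snapshot
# image is LOCKED and the active image is ILLEGAL.
print(IMAGE_STATUS[2], IMAGE_STATUS[4])  # LOCKED ILLEGAL
```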


#### After creating the template:

[root@Dafna-32 ~]# psql -U postgres engine -c "select image_guid,imagestatus,storage_name,vm_names,storage_id,vm_snapshot_id from images_storage_domain_view;" |grep clone1
             image_guid              | imagestatus | storage_name |      vm_names      |              storage_id              |            vm_snapshot_id
--------------------------------------+-------------+--------------+--------------------+--------------------------------------+--------------------------------------
 f5fc7d8a-7949-41dd-a979-5ec21f92cb07 |           2 | Dafna-SF9-01 | clone1             | 52e4917b-10bc-4b03-9b82-a2e15732c15b | 96afd8a1-1ec7-41f9-89e3-517e1f025179
 8233993e-a451-4132-a822-3c1a908ab684 |           1 | Dafna-SF9-01 | clone1             | 52e4917b-10bc-4b03-9b82-a2e15732c15b | 8c559dd4-2d34-409c-a2d3-d41807d7de99

#### Trying to delete the VM again:

[root@Dafna-32 ~]# psql -U postgres engine -c "select image_guid,imagestatus,storage_name,vm_names,storage_id,vm_snapshot_id from images_storage_domain_view;" |grep clone1
             image_guid              | imagestatus | storage_name |      vm_names      |              storage_id              |            vm_snapshot_id
--------------------------------------+-------------+--------------+--------------------+--------------------------------------+--------------------------------------
 f5fc7d8a-7949-41dd-a979-5ec21f92cb07 |           2 | Dafna-SF9-01 | clone1             | 52e4917b-10bc-4b03-9b82-a2e15732c15b | 96afd8a1-1ec7-41f9-89e3-517e1f025179
 8233993e-a451-4132-a822-3c1a908ab684 |           4 | Dafna-SF9-01 | clone1             | 52e4917b-10bc-4b03-9b82-a2e15732c15b | 8c559dd4-2d34-409c-a2d3-d41807d7de99


### Looking at the images in the images table:

[root@Dafna-32 ~]# psql -U postgres engine -c "select image_guid,imagestatus,vm_snapshot_id,image_group_id from images;" |grep f5fc7d8a-7949-41dd-a979-5ec21f92cb07
              image_guid              | imagestatus |            vm_snapshot_id            |            image_group_id
--------------------------------------+-------------+--------------------------------------+--------------------------------------
 f5fc7d8a-7949-41dd-a979-5ec21f92cb07 |           2 | 96afd8a1-1ec7-41f9-89e3-517e1f025179 | 49582f3b-4acf-4f39-b8b7-f60b1477e995
 8233993e-a451-4132-a822-3c1a908ab684 |           4 | 8c559dd4-2d34-409c-a2d3-d41807d7de99 | 49582f3b-4acf-4f39-b8b7-f60b1477e995


### From the snapshots table:

[root@Dafna-32 ~]# psql -U postgres engine -c "select vm_id,snapshot_id,status,description,snapshot_type from snapshots;"
               vm_id                 |             snapshot_id              | status |                description                | snapshot_type
--------------------------------------+--------------------------------------+--------+-------------------------------------------+---------------
 f339a37d-4bc9-43b8-9179-d879ebfa0360 | 96afd8a1-1ec7-41f9-89e3-517e1f025179 | BROKEN | Auto-generated for Live Storage Migration | REGULAR
 f339a37d-4bc9-43b8-9179-d879ebfa0360 | 8c559dd4-2d34-409c-a2d3-d41807d7de99 | OK     | Active VM                                 | ACTIVE

Comment 2 Dafna Ron 2013-04-08 16:02:42 UTC
Tested on sf13.
I was unable to create a template or export the VM, and got an appropriate message, but I was able to perform other actions.
I opened a new bug for the new issues (bug 949624) and am moving this one to verified.

Comment 3 Itamar Heim 2013-06-11 09:14:07 UTC
3.2 has been released

Comment 4 Itamar Heim 2013-06-11 09:40:14 UTC
3.2 has been released