Bug 1433052 - unexpected qcow_version v2 on 2nd disk after snapshot on a VM created from template
Summary: unexpected qcow_version v2 on 2nd disk after snapshot on a VM created from template
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.1.1.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.1.2
Target Release: 4.1.2
Assignee: Maor
QA Contact: Avihai
URL:
Whiteboard:
Duplicates: 1437533
Depends On:
Blocks:
 
Reported: 2017-03-16 16:21 UTC by Avihai
Modified: 2021-03-11 16:02 UTC
CC List: 6 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2017-05-23 08:12:40 UTC
oVirt Team: Storage
Embargoed:
rule-engine: ovirt-4.1+


Attachments
engine , vdsm , restAPI get (506.94 KB, application/x-gzip)
2017-03-16 16:21 UTC, Avihai
no flags Details
engine , vdsm , restAPI get (767.94 KB, application/x-gzip)
2017-03-19 06:03 UTC, Avihai
no flags Details
requested_DB_info (2.40 KB, text/plain)
2017-03-19 08:17 UTC, Avihai
no flags Details
engine & vdsm all hosts logs (2.73 MB, application/x-gzip)
2017-03-20 15:52 UTC, Avihai
no flags Details
repo3 logs (1.63 MB, application/x-gzip)
2017-03-21 05:36 UTC, Avihai
no flags Details
engine , vdsm , restAPI get (1.48 MB, application/x-gzip)
2017-04-19 12:09 UTC, Avihai
no flags Details


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 74401 0 master MERGED core: getQemuImageInfo should not call teardown on live snapshot. 2017-03-23 12:46:51 UTC
oVirt gerrit 74570 0 ovirt-engine-4.1 MERGED core: Refactor qcow compat update. 2017-03-24 11:05:26 UTC
oVirt gerrit 74571 0 ovirt-engine-4.1 MERGED core: getQemuImageInfo should not call teardown on live snapshot. 2017-03-24 11:06:43 UTC

Description Avihai 2017-03-16 16:21:57 UTC
Created attachment 1263750 [details]
engine , vdsm , restAPI get

Description of problem:
After creating a snapshot on a VM that was created from a template, one of the VM's disks (the 2nd disk) reports an unexpected qcow_version of v2 on a V4 DC.


Version-Release number of selected component (if applicable):
Engine - ovirt-engine-4.1.1.4-0.1.el7.noarch
VDSM - 4.19.6-1

How reproducible:
100%

Steps to Reproduce:
1. Use existing V4 DC & cluster
2. Create a VM1 (from glance template) that comes with a 10G bootable (rhel 7.3 OS) thin disk (disk1)
3. Add a 1G thin disk2 to VM1
4. Create a new template of the previously created VM1

5. Create a new VM2 from the new template (should be created with disks 3 & 4 matching disks 1&2 of VM1) 

6. Verify that all the disk images are created as qcow version 1.1 (see the verification sketch after these steps)
-> ALL 4 DISKS ARE in Qcow_v3!

7. Create a snapshot of VM2 -> disk4 of VM2 changes to qcow_v2!

Snapshot creation was done at 2017-03-16 17:52:58,429; something changed disk4 of VM2 around that time.
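
As an editorial aside, the on-disk compat level can be cross-checked directly with qemu-img; the sketch below is an illustration only (it assumes host access to the volume path, and on an iSCSI domain the LV must be active). Compat '0.10' corresponds to qcow_v2 and '1.1' to qcow_v3.

# Sketch only: check the actual qcow2 compat of a volume with qemu-img.
# Assumes qemu-img is available on the host and the volume path is readable
# (on block storage the LV must be activated first).
import json
import subprocess

def qcow2_compat(volume_path):
    """Return the qcow2 compat level ('0.10' => qcow_v2, '1.1' => qcow_v3)."""
    out = subprocess.check_output(
        ["qemu-img", "info", "--output=json", volume_path])
    info = json.loads(out)
    return info.get("format-specific", {}).get("data", {}).get("compat")

# Example with a hypothetical LV path on an iSCSI domain:
# print(qcow2_compat("/dev/<storage-domain-uuid>/<image-uuid>"))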

Actual results:
The qcow version of one of the disks of VM2 (disk4), which was created from the new template, is an unexpected qcow_version (v2) on a V4 DC.
All the other disks on VM1 & VM2 are qcow_v3 as expected.

VM1 disk1 (10G bootable) - qcow_v3
VM1 disk2 (1G)           - qcow v3

VM2 disk3 (10G bootable) - qcow_v3
VM2 disk4 (1G)           - qcow v2 ! 


VM2 Disk4 details :
alias = vm_TestCase18340_REST_ISCS_1615021205_Disk_0 , 
id= da415724-d246-4c54-97aa-b10cc81156eb

Expected results:
As the DC & SD are V4, all created disks & snapshots should be qcow_v3.

Additional info:
Creating a snapshot on a VM with 2 disks does not reproduce this issue; only this flow does.
Something about combining a VM created from a template with more than one disk and then creating a snapshot causes the 2nd disk to change qcow version.

Disk snapshots are in qcow_v3 as expected.

Comment 1 Avihai 2017-03-16 16:23:28 UTC
This test flow matches TestCase18340 (found in automation run)

Comment 2 Tal Nisan 2017-03-16 17:07:06 UTC
Avihai, did the test use to pass before?

Comment 3 Avihai 2017-03-16 17:29:18 UTC
(In reply to Tal Nisan from comment #2)
> Avihai, did the test use to pass before?

Hi Tal, it's a new test that was just implemented in automation.

Comment 4 Maor 2017-03-18 21:58:35 UTC
Hi Avihai,

It seems like the logs are from 2017-03-15, so I can't find any reference to the bug.
Keep in mind that the process of updating the QCOW version uses V2 by default and then asks VDSM for the QCOW version of the volume. If, for any reason, an error occurs in VDSM, the engine keeps the default configuration, V2.
Can you please attach the logs again, including the operation?
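
In other words, the update flow looks roughly like the sketch below. This is an illustration only: the real logic lives in the engine's Java code, and the names here are made up.

# Illustrative sketch of the default-to-V2 behaviour described above.
# Not the engine's actual code; names are invented for clarity.
DEFAULT_QCOW_COMPAT = "0.10"   # i.e. QCOW2_V2

def resolve_qcow_compat(vdsm, volume_id):
    compat = DEFAULT_QCOW_COMPAT
    try:
        # prepareImage / getQemuImageInfo / teardownImage round-trip to VDSM
        info = vdsm.get_qemu_image_info(volume_id)
        compat = info["compat"]          # e.g. "1.1" => QCOW2_V3
    except Exception:
        # Any VDSM failure (such as the teardown error seen in this bug)
        # leaves the engine with the default V2 value.
        pass
    return compat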

Comment 5 Avihai 2017-03-19 06:03:22 UTC
Created attachment 1264532 [details]
engine , vdsm , restAPI get

(In reply to Maor from comment #4)
> Hi Avihai,
> 
> It seems like the logs are from 2017-03-15 so I can't seem to find any
> reference to the bug.
> Keep in mind that the process of updating the QCOW version is using V2 by
> default and then ask VDSM for the QCOW version of the volume. If, for any
> reason, there is an error that occurs in VDSM, the engine will keep the
> default configuration V2.
> Can you please attach the logs again which include the operation?


Correct logs reattached.


Snapshot creation was done at 2017-03-16 17:52:58,429; something changed disk4 of VM2 back to qcow_v2 around that time.

Comment 6 Maor 2017-03-19 07:03:53 UTC
Can you please attach the logs of host_mixed_1? It seems that the getQemuImageInfo operation was done on that host.
Also, can you please share the output of the DB so I will know which volume id has V2:
   SELECT image_guid,volume_type,volume_format,qcow_compat FROM images;
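
For reference, one way to pull that output is sketched below; it assumes the default 'engine' database, local access, and psycopg2 installed, with the connection details as placeholders.

# Sketch: dump image_guid/volume_type/volume_format/qcow_compat from the
# engine DB. The database name, user and password below are placeholders.
import psycopg2

conn = psycopg2.connect(dbname="engine", user="engine",
                        host="localhost", password="<engine-db-password>")
with conn, conn.cursor() as cur:
    cur.execute("SELECT image_guid, volume_type, volume_format, qcow_compat "
                "FROM images;")
    for row in cur.fetchall():
        print(row)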


From the engine logs it looks like there was an attempt to amend, but it failed:

2017-03-16 17:52:59,619+02 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand] (default task-4) [3ff3ee2e] START, CreateSnapshotVDSCommand( CreateSnapshotVDSCommandParameters:{runAsync='true', storagePoolId='978f3ace-33cd-4310-aca4-6d59c92007d6', ignoreFailoverLimit='false', storageDomainId='09f429b1-2157-4f4a-bfab-787bb830a825', imageGroupId='26fbeb8f-33f5-4153-9bf8-99c75e883167', imageSizeInBytes='10737418240', volumeFormat='COW', newImageId='7725f841-6932-4216-9604-c4f8b8af1258', newImageDescription='', imageInitialSizeInBytes='0', imageId='d556750e-31b2-4f73-a4e1-75393fbd95e4', sourceImageGroupId='26fbeb8f-33f5-4153-9bf8-99c75e883167'}), log id: 242a109f
2017-03-16 17:53:00,009+02 ERROR [org.ovirt.engine.core.bll.storage.disk.image.AmendImageGroupVolumesCommand] (DefaultQuartzScheduler1) [39025250] Ending command 'org.ovirt.engine.core.bll.storage.disk.image.AmendImageGroupVolumesCommand' with failure.


There was also a storage issue which happened during the teardown:

2017-03-16 17:53:41,200+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (DefaultQuartzScheduler3) [5e274379] START, TeardownImageVDSCommand(HostName = host_mixed_1, ImageActionsVDSCommandParameters:{runAsync='true', hostId='552c7677-922b-4964-9446-12079565b3b3'}), log id: 7faadb80
2017-03-16 17:54:01,475+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler3) [5e274379] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM host_mixed_1 command TeardownImageVDS failed: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 09f429b1-2157-4f4a-bfab-787bb830a825/abe99255-06ae-41bf-b83b-f3e21fddb0c1 in use.\', \'  Logical volume 09f429b1-2157-4f4a-bfab-787bb830a825/eb7bf3c9-2881-403d-8611-f83df79679c0 in use.\']\\n09f429b1-2157-4f4a-bfab-787bb830a825/[\'abe99255-06ae-41bf-b83b-f3e21fddb0c1\', \'eb7bf3c9-2881-403d-8611-f83df79679c0\']",)',)
2017-03-16 17:54:01,475+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (DefaultQuartzScheduler3) [5e274379] Command 'TeardownImageVDSCommand(HostName = host_mixed_1, ImageActionsVDSCommandParameters:{runAsync='true', hostId='552c7677-922b-4964-9446-12079565b3b3'})' execution failed: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 09f429b1-2157-4f4a-bfab-787bb830a825/abe99255-06ae-41bf-b83b-f3e21fddb0c1 in use.\', \'  Logical volume 09f429b1-2157-4f4a-bfab-787bb830a825/eb7bf3c9-2881-403d-8611-f83df79679c0 in use.\']\\n09f429b1-2157-4f4a-bfab-787bb830a825/[\'abe99255-06ae-41bf-b83b-f3e21fddb0c1\', \'eb7bf3c9-2881-403d-8611-f83df79679c0\']",)',)

Comment 7 Avihai 2017-03-19 08:16:51 UTC
(In reply to Maor from comment #6)
> Can you please attach the logs of host_mixed_1, seems that the operation of
> getQemuImageInfo was done on that host.

I already attached the VDSM log of host_mixed_1, as it's the only host in the relevant DC. Do you need other logs from it?


> Also can you please share the output of the DB so I would know which volume
> id was with V2:
>    SELECT image_guid,volume_type,volume_format,qcow_compat FROM images;

Added a new attachment with the requested content. (Take into account that I extracted this just now, and I have run other tests on this setup since I opened this bug.)

> From the engine logs it looks like there was an attempt to do amend but
> there was a failure:
> 
> 2017-03-16 17:52:59,619+02 INFO 
> [org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
> (default task-4) [3ff3ee2e] START, CreateSnapshotVDSCommand(
> CreateSnapshotVDSCommandParameters:{runAsync='true',
> storagePoolId='978f3ace-33cd-4310-aca4-6d59c92007d6',
> ignoreFailoverLimit='false',
> storageDomainId='09f429b1-2157-4f4a-bfab-787bb830a825',
> imageGroupId='26fbeb8f-33f5-4153-9bf8-99c75e883167',
> imageSizeInBytes='10737418240', volumeFormat='COW',
> newImageId='7725f841-6932-4216-9604-c4f8b8af1258', newImageDescription='',
> imageInitialSizeInBytes='0', imageId='d556750e-31b2-4f73-a4e1-75393fbd95e4',
> sourceImageGroupId='26fbeb8f-33f5-4153-9bf8-99c75e883167'}), log id: 242a109f
> 2017-03-16 17:53:00,009+02 ERROR
> [org.ovirt.engine.core.bll.storage.disk.image.AmendImageGroupVolumesCommand]
> (DefaultQuartzScheduler1) [39025250] Ending command
> 'org.ovirt.engine.core.bll.storage.disk.image.AmendImageGroupVolumesCommand'
> with failure.
> 
> 
> There was also a storage issue which happened during the teardown:
> 
> 2017-03-16 17:53:41,200+02 INFO 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand]
> (DefaultQuartzScheduler3) [5e274379] START, TeardownImageVDSCommand(HostName
> = host_mixed_1, ImageActionsVDSCommandParameters:{runAsync='true',
> hostId='552c7677-922b-4964-9446-12079565b3b3'}), log id: 7faadb80
> 2017-03-16 17:54:01,475+02 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (DefaultQuartzScheduler3) [5e274379] EVENT_ID:
> VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack: null,
> Custom Event ID: -1, Message: VDSM host_mixed_1 command TeardownImageVDS
> failed: Cannot deactivate Logical Volume: ('General Storage Exception: ("5
> [] [\'  Logical volume
> 09f429b1-2157-4f4a-bfab-787bb830a825/abe99255-06ae-41bf-b83b-f3e21fddb0c1 in
> use.\', \'  Logical volume
> 09f429b1-2157-4f4a-bfab-787bb830a825/eb7bf3c9-2881-403d-8611-f83df79679c0 in
> use.\']\\n09f429b1-2157-4f4a-bfab-787bb830a825/[\'abe99255-06ae-41bf-b83b-
> f3e21fddb0c1\', \'eb7bf3c9-2881-403d-8611-f83df79679c0\']",)',)
> 2017-03-16 17:54:01,475+02 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand]
> (DefaultQuartzScheduler3) [5e274379] Command
> 'TeardownImageVDSCommand(HostName = host_mixed_1,
> ImageActionsVDSCommandParameters:{runAsync='true',
> hostId='552c7677-922b-4964-9446-12079565b3b3'})' execution failed:
> VDSGenericException: VDSErrorException: Failed in vdscommand to
> TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General
> Storage Exception: ("5 [] [\'  Logical volume
> 09f429b1-2157-4f4a-bfab-787bb830a825/abe99255-06ae-41bf-b83b-f3e21fddb0c1 in
> use.\', \'  Logical volume
> 09f429b1-2157-4f4a-bfab-787bb830a825/eb7bf3c9-2881-403d-8611-f83df79679c0 in
> use.\']\\n09f429b1-2157-4f4a-bfab-787bb830a825/[\'abe99255-06ae-41bf-b83b-
> f3e21fddb0c1\', \'eb7bf3c9-2881-403d-8611-f83df79679c0\']",)',)

Comment 8 Avihai 2017-03-19 08:17:27 UTC
Created attachment 1264560 [details]
requested_DB_info

Comment 9 Avihai 2017-03-20 15:51:12 UTC
Another reproduction on the latest engine, 4.1.1.5, with the same scenario.

logs of all vdsm hosts + engine attached.

REST API GET from the storage domain:
<name>vm_TestCase18340_REST_ISCS_2017191517_Disk_0</name>
<description>rhel7.3_rhv4.1_guest_disk (9c3e573)</description>
<link href="/ovirt-engine/api/disks/f1a85c77-735a-4b10-88a2-a395c980ccfd/permissions" rel="permissions"/>
<link href="/ovirt-engine/api/disks/f1a85c77-735a-4b10-88a2-a395c980ccfd/statistics" rel="statistics"/>
<actual_size>1073741824</actual_size>
<alias>vm_TestCase18340_REST_ISCS_2017191517_Disk_0</alias>
<format>cow</format>
<image_id>b1521147-7192-4919-beb0-d6f7483ffdc5</image_id>
<propagate_errors>false</propagate_errors>
<provisioned_size>10737418240</provisioned_size>
<qcow_version>qcow2_v2</qcow_version>

Timeline :
2017-03-20 17:39:41,768 - MainThread - snapshots - INFO - Property 'snapshot->description' has correct value: snap_test_create_new_thin_vm_2017391237
2017-03-20 17:39:41,772 - MainThread - art.ll_lib.jobs - INFO - Waiting for jobs ['Creating VM Snapshot .* for VM .*']
2017-03-20 17:39:41,834 - MainThread - art.ll_lib.jobs - INFO - Active jobs: ['Creating VM Snapshot snap_test_create_new_thin_vm_2017391237 for VM vm_TestCase18340_REST_ISCS_2017191489']
2017-03-20 17:39:51,152 - MainThread - art.ll_lib.jobs - DEBUG - Skipping duplicate log-messages...
2017-03-20 17:40:42,014 - MainThread - art.ll_lib.jobs - INFO - Active jobs: []
2017-03-20 17:40:42,074 - MainThread - art.ll_lib.jobs - INFO - JOB 'Creating VM Snapshot snap_test_create_new_thin_vm_2017391237 for VM vm_TestCase18340_REST_ISCS_2017191489' TOOK 64.132 seconds
2017-03-20 17:40:42,074 - MainThread - art.ll_lib.jobs - INFO - All jobs are gone
2017-03-20 17:40:43,224 - MainThread - art.ll_lib.vms - INFO - Snapshots found: ['Active VM', 'snap_test_create_new_thin_vm_2017391237']
2017-03-20 17:40:44,512 - MainThread - snapshots - INFO - snapshot status is 'ok'
2017-03-20 17:40:45,266 - MainThread - art.ll_lib.vms - DEBUG - Skipping duplicate log-messages...
True
(Pdb) ll_jobs.wait_for_jobs([config.JOB_CREATE_SNAPSHOT])
2017-03-20 17:40:55,706 - MainThread - art.ll_lib.jobs - INFO - Waiting for jobs ['Creating VM Snapshot .* for VM .*']
2017-03-20 17:40:55,775 - MainThread - art.ll_lib.jobs - INFO - Active jobs: []
2017-03-20 17:40:55,837 - MainThread - art.ll_lib.jobs - INFO - JOB 'Creating VM Snapshot snap_test_create_new_thin_vm_2017391237 for VM vm_TestCase18340_REST_ISCS_2017191489' TOOK 64.132 seconds
2017-03-20 17:40:55,838 - MainThread - art.ll_lib.jobs - INFO - All jobs are gone
(Pdb) helpers.verify_qcow_snapdisks_version_sd(self.storage_domain, expected_qcow_version=config.QCOW_V3)
*** AssertionError: Snapshot disk disk_TestCase18340_REST_ISCS_2017194189 has unexpected cow ver qcow2_v2

Comment 10 Avihai 2017-03-20 15:52:05 UTC
Created attachment 1264784 [details]
engine & vdsm all hosts logs

Comment 11 Maor 2017-03-20 16:48:33 UTC
We can see the same exception as before in the teardown:

2017-03-20 17:40:37,372+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (DefaultQuartzScheduler7) [17a242a6] Command 'TeardownImageVDSCommand(HostName = host_mixed_1, ImageActionsVDSCommandParameters:{runAsync='true', hostId='552c7677-922b-4964-9446-12079565b3b3'})' execution failed: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume e14fa1ae-21fa-4d56-93a1-ec000713d602/a839f366-e4b1-47fd-baac-0958a81d3cc5 in use.\', \'  Logical volume e14fa1ae-21fa-4d56-93a1-ec000713d602/b1521147-7192-4919-beb0-d6f7483ffdc5 in use.\']\\ne14fa1ae-21fa-4d56-93a1-ec000713d602/[\'a839f366-e4b1-47fd-baac-0958a81d3cc5\', \'b1521147-7192-4919-beb0-d6f7483ffdc


In VDSM we can see the error also:

2017-03-20 17:40:36,316+0200 DEBUG (jsonrpc/2) [storage.Misc.excCmd] FAILED: <err> = '  Logical volume e14fa1ae-21fa-4d56-93a1-ec000713d602/a839f366-e4b1-47fd-baac-0958a81d3cc5 in use.\n  Logical volume e14fa1ae-21fa-4d56-93a1-ec000713d602/b1521147-7192-4919-beb0-d6f7483ffdc5 in use.\n'; <rc> = 5 (commands:93)
2017-03-20 17:40:36,317+0200 ERROR (jsonrpc/2) [storage.TaskManager.Task] (Task='eb14ca70-8694-43df-ac84-e7a24a5a9823') Unexpected error (task:870)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 877, in _run
    return fn(*args, **kargs)
  File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 52, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 3194, in teardownImage
    dom.deactivateImage(imgUUID)
  File "/usr/share/vdsm/storage/blockSD.py", line 1278, in deactivateImage
    lvm.deactivateLVs(self.sdUUID, volUUIDs)
  File "/usr/share/vdsm/storage/lvm.py", line 1305, in deactivateLVs
    _setLVAvailability(vgName, toDeactivate, "n")
  File "/usr/share/vdsm/storage/lvm.py", line 844, in _setLVAvailability
    raise error(str(e))
CannotDeactivateLogicalVolume: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume e14fa1ae-21fa-4d56-93a1-ec000713d602/a839f366-e4b1-47fd-baac-0958a81d3cc5 in use.\', \'  Logical volume e14fa1ae-21fa-4d56-93a1-ec000713d602/b1521147-7192-4919-beb0-d6f7483ffdc5 in use.\']\\ne14fa1ae-21fa-4d56-93a1-ec000713d602/[\'a839f366-e4b1-47fd-baac-0958a81d3cc5\', \'b1521147-7192-4919-beb0-d6f7483ffdc5\']",)',)

For some reason this LV is in use and cannot be deactivated.
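
A quick way to see which LVs of the storage-domain VG are still open on the host is sketched below; this is a diagnostic added for context, not something from the original report. It parses the lv_attr bits, where an 'o' in the sixth position means the device is open.

# Diagnostic sketch: list LVs of a VG that are currently open (in use).
# Assumes lvm2 is installed and the script runs as root on the host.
import subprocess

def open_lvs(vg_name):
    out = subprocess.check_output(
        ["lvs", "--noheadings", "-o", "lv_name,lv_attr", vg_name])
    opened = []
    for line in out.decode().splitlines():
        name, attr = line.split()
        if len(attr) >= 6 and attr[5] == "o":   # 6th attr char: device open
            opened.append(name)
    return opened

# Example with the storage-domain VG from the logs above:
# print(open_lvs("e14fa1ae-21fa-4d56-93a1-ec000713d602"))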

Comment 12 Maor 2017-03-20 17:47:03 UTC
Some operation seems to be using the LV for some reason.
Does it consistently fail with this exception?
If so, can you also make it fail manually, without the automatic test, since in my env it works as expected.
There could be some operation that we missed in the reproduce steps.

Comment 13 Avihai 2017-03-21 05:34:33 UTC
(In reply to Maor from comment #12)
> Some operation seems to use the LV for some reason.
> Does it constantly fails with this exception?

Yes .

> If so, can you also make it fail manually without the automatic test since
> it seems that in my env it works as expected.
> There could be some operation that we missed to mention in the reproduce
> steps

I reproduced it again, this time manually as requested, with the same procedure.

When I checked qcow_version I made sure that:
All disks are up - not locked.
The snapshot is up - not locked.

I also attached the storage domain disks before & after the snapshot was taken.


Steps to Reproduce:
1. Use existing V4 DC & cluster
2. Create a VM1 (from glance template) that comes with a 10G bootable (rhel 7.3 OS) thin disk (disk1)
3. Add a 1G thin disk2 to VM1
4. Create a new template of the previously created VM1

5. Create a new VM2 from the new template (should be created with disks 3 & 4 matching disks 1&2 of VM1) 

6. Verify that all the disk images are created as qcow version 1.1
-> ALL 4 DISKS ARE in Qcow_v3!

7. Create a snapshot of VM2 -> snapshot up.

All disks are up - not locked.
The snapshot is up - not locked.

disk4 of VM2 changes to qcow_v2!

disk4 ID = 4ef03c91-f2ca-4165-b253-86502748becd.

Snapshot creation was completed at Mar 21, 2017 6:58:13.

From the REST path
https://storage-ge-04.scl.lab.tlv.redhat.com/ovirt-engine/api/storagedomains/e14fa1ae-21fa-4d56-93a1-ec000713d602/disks/4ef03c91-f2ca-4165-b253-86502748becd :

<disk href="/ovirt-engine/api/disks/4ef03c91-f2ca-4165-b253-86502748becd" id="4ef03c91-f2ca-4165-b253-86502748becd">
<actions>
<link href="/ovirt-engine/api/disks/4ef03c91-f2ca-4165-b253-86502748becd/sparsify" rel="sparsify"/>
<link href="/ovirt-engine/api/disks/4ef03c91-f2ca-4165-b253-86502748becd/export" rel="export"/>
<link href="/ovirt-engine/api/disks/4ef03c91-f2ca-4165-b253-86502748becd/move" rel="move"/>
<link href="/ovirt-engine/api/disks/4ef03c91-f2ca-4165-b253-86502748becd/copy" rel="copy"/>
</actions>
<name>test_vm_Disk1</name>
<description>disk2_1G</description>
<link href="/ovirt-engine/api/disks/4ef03c91-f2ca-4165-b253-86502748becd/permissions" rel="permissions"/>
<link href="/ovirt-engine/api/disks/4ef03c91-f2ca-4165-b253-86502748becd/statistics" rel="statistics"/>
<vms>
<vm id="abc8f33c-539d-477d-82fe-2f3110f4d253"/>
</vms>
<actual_size>1073741824</actual_size>
<alias>test_vm_Disk1</alias>
<format>cow</format>
<image_id>1ed9f990-5716-4055-9816-6cdf1d2133ae</image_id>
<propagate_errors>false</propagate_errors>
<provisioned_size>1073741824</provisioned_size>
<qcow_version>qcow2_v2</qcow_version>
<shareable>false</shareable>
<sparse>true</sparse>
<status>ok</status>
<storage_type>image</storage_type>
<wipe_after_delete>false</wipe_after_delete>
<disk_profile href="/ovirt-engine/api/diskprofiles/d6ba11ce-aae8-4dee-bf2b-629e4ec6fd26" id="d6ba11ce-aae8-4dee-bf2b-629e4ec6fd26"/>
<quota id="18ad6017-5ec3-4318-9f40-29c25c7718b4"/>
<storage_domain href="/ovirt-engine/api/storagedomains/e14fa1ae-21fa-4d56-93a1-ec000713d602" id="e14fa1ae-21fa-4d56-93a1-ec000713d602"/>
<storage_domains>
<storage_domain href="/ovirt-engine/api/storagedomains/e14fa1ae-21fa-4d56-93a1-ec000713d602" id="e14fa1ae-21fa-4d56-93a1-ec000713d602"/>
</storage_domains>
</disk>
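
For completeness, the same check can be scripted against the REST API. The sketch below follows the URL pattern above; the credentials and the disabled certificate verification are placeholders for this lab setup.

# Sketch: read qcow_version of a disk from the oVirt REST API.
# Endpoint pattern taken from the comment above; credentials are placeholders.
import xml.etree.ElementTree as ET
import requests

ENGINE = "https://storage-ge-04.scl.lab.tlv.redhat.com/ovirt-engine/api"
SD_ID = "e14fa1ae-21fa-4d56-93a1-ec000713d602"
DISK_ID = "4ef03c91-f2ca-4165-b253-86502748becd"

resp = requests.get(
    "%s/storagedomains/%s/disks/%s" % (ENGINE, SD_ID, DISK_ID),
    auth=("admin@internal", "<password>"),
    headers={"Accept": "application/xml"},
    verify=False)                      # lab engine with a self-signed cert
resp.raise_for_status()
print(ET.fromstring(resp.content).findtext("qcow_version"))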

Comment 14 Avihai 2017-03-21 05:36:00 UTC
Created attachment 1264906 [details]
repo3 logs

Comment 15 Maor 2017-03-21 12:04:30 UTC
The bug is with live snapshot.
A live snapshot leaves it as QCOW2_V2, since the teardown fails while the VM is using the disk.
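
The merged patch ("core: getQemuImageInfo should not call teardown on live snapshot") points at the fix direction; roughly, as in the sketch below. This is an illustration only: the real change is in the engine's Java code (gerrit 74401/74571) and the names here are invented.

# Illustrative sketch of the fix direction only; not the actual engine code.
def refresh_qcow_compat(vdsm, volume_id, vm_is_running):
    compat = vdsm.get_qemu_image_info(volume_id)["compat"]   # e.g. "1.1"
    if not vm_is_running:
        # Skip the teardown for a live snapshot: deactivating an LV that a
        # running VM still holds is exactly the failure seen in this bug.
        vdsm.teardown_image(volume_id)
    return compat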

Comment 16 Maor 2017-03-22 10:30:44 UTC
Avihai,

Can you please confirm that the failing automatic test was creating a snapshot on a running VM?
I want to make sure that this patch solves the issue and that we don't miss anything else.

Comment 17 Avihai 2017-03-22 18:03:43 UTC
(In reply to Maor from comment #16)
> Avihai,
> 
> Can u please confirm that the failing automatic test was creating a snapshot
> on a running VM.
> I want to make sure that this patch solves this issue and we don't miss
> anything else.

Confirmed. I ran the same scenario in automation & manually, and the trigger is indeed creating a snapshot on a running VM.

Scenario is :
Steps to Reproduce:
1. Use existing V4 DC & cluster
2. Create a VM1 (from glance template) that comes with a 10G bootable (rhel 7.3 OS) thin disk (disk1)
3. Add a 1G thin disk2 to VM1
4. Create a new template of the previously created VM1
5. Create a new VM2 from the new template (should be created with disks 3 & 4 matching disks 1&2 of VM1) 
6. Verify that all the disk images are created as qcow version 1.1
7. Start (activate) VM2
8. Create a snapshot of VM2 -> issue encountered

Comment 18 Liron Aravot 2017-03-30 14:48:39 UTC
*** Bug 1437533 has been marked as a duplicate of this bug. ***

Comment 19 Maor 2017-04-09 12:24:13 UTC
*** Bug 1440472 has been marked as a duplicate of this bug. ***

Comment 20 Avihai 2017-04-19 12:08:00 UTC
Description of problem:
The creation of a snapshot after a DC upgrade (4.0 -> 4.1) is faulty.
As a result, qcow_version is not updated to qcow2_v3 as expected.

Version-Release number of selected component (if applicable):
Engine:
ovirt-engine-4.1.1.8-0.1.el7.noarch

VDSM:
4.19.10.1

How reproducible:
100%

Steps to Reproduce:
1. Create a DC + cluster on v3
2. Create a VM with a thin disk & start the VM
3. Write data (file1) and create snapshot s1
4. Write data (file2) and create snapshot s2
5. Upgrade the cluster + DC from v3 to v4
6. Verify that the snapshot images are version 0.10
7. Create snapshot s3 & verify the snapshot image is version 0.10
8. Create snapshot s4 & verify the snapshot image is version 1.1

Actual results:
After the upgrade, creating a snapshot is followed by the error event: VDSM host_mixed_1 command TeardownImageVDS failed: Cannot deactivate Logical Volume: ('General Storage Exception...
The issue started when I created the s3 & s4 snapshots after the DC upgrade.

Two effects were seen:
1) The new Active VM image volumes (snapshot disks, created during s3 creation) failed to update, which caused them to remain qcow_v2 instead of the expected qcow_v3.
2) The s4 image volumes (snapshot disks) likewise failed to update and remained qcow_v2 instead of the expected qcow_v3.


Expected results:
After the upgrade, creating a snapshot should not produce errors.
The qcow version of the Active VM disks after the 1st snapshot (s3) should be qcow_v3.
The 2nd snapshot's (s4) snapshot disks should be qcow_v3.


Additional info:
This issue is seen only on an iSCSI storage domain (NFS & Gluster look fine).

Time flow of creation:
2017-04-19 13:57:39,104 - MainThread - root - INFO - Description snap_test_live_merge_old_ver_19135739083: Add snapshot to VM vm_TestCase18346_REST_ISCS_1913513136 with {'wait': True}
2017-04-19 14:01:00,026 - MainThread - art.ll_lib.jobs - INFO - JOB 'Creating VM Snapshot snap_test_live_merge_old_ver_19135739083 for VM vm_TestCase18346_REST_ISCS_1913513136' TOOK 196.987 seconds
2017-04-19 14:01:08,094 - MainThread - root - INFO - Description snap_test_live_merge_old_ver_19140108084: Add snapshot to VM vm_TestCase18346_REST_ISCS_1913513136 with {'wait': True}
2017-04-19 14:05:06,220 - MainThread - art.ll_lib.jobs - INFO - JOB 'Creating VM Snapshot snap_test_live_merge_old_ver_19140108084 for VM vm_TestCase18346_REST_ISCS_1913513136' TOOK 234.691 seconds

Relevant snapshot name (snapshot s4) = snap_test_live_merge_old_ver_19140108084

Relevant snapshot disk IDs:
a05a9d7f-3c60-4781-936f-cb72346349e7
e611febd-20a0-4764-a33e-44ac14b635f4

Active VM snapshot disk IDs:
efe1a38a-5127-43b1-a1f0-4cdb1dd76744
f60979a7-6632-41c9-8015-cd72d5ee101e

Engine Errors:
2017-04-19 14:00:10,276+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler3) [4b091013] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM host_mixed_1 command TeardownImageVDS failed: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/2a29823b-f8d5-4643-b193-d9ddf1105b35 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/35d1fd45-241d-4b5a-835f-ae14eafd37fd in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/0c953587-1c78-488b-bdec-e42f0e256559 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/a05a9d7f-3c60-4781-936f-cb72346349e7 in use.\']\\na792c4e9-f42e-4ecc-bde9-266bdc23ab36/[\'a05a9d7f-3c60-4781-936f-cb72346349e7\', \'2a29823b-f8d5-4643-b193-d9ddf1105b35\', \'0c953587-1c78-488b-bdec-e42f0e256559\', \'35d1fd45-241d-4b5a-835f-ae14eafd37fd\']",)',)
2017-04-19 14:00:10,276+03 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (DefaultQuartzScheduler3) [4b091013] Command 'TeardownImageVDSCommand(HostName = host_mixed_1, ImageActionsVDSCommandParameters:{runAsync='true', hostId='0ba63f2d-06b2-4392-82ce-7abd60614b2a'})' execution failed: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/2a29823b-f8d5-4643-b193-d9ddf1105b35 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/35d1fd45-241d-4b5a-835f-ae14eafd37fd in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/0c953587-1c78-488b-bdec-e42f0e256559 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/a05a9d7f-3c60-4781-936f-cb72346349e7 in use.\']\\na792c4e9-f42e-4ecc-bde9-266bdc23ab36/[\'a05a9d7f-3c60-4781-936f-cb72346349e7\', \'2a29823b-f8d5-4643-b193-d9ddf1105b35\', \'0c953587-1c78-488b-bdec-e42f0e256559\', \'35d1fd45-241d-4b5a-835f-ae14eafd37fd\']",)',)
2017-04-19 14:00:10,277+03 ERROR [org.ovirt.engine.core.bll.snapshots.CreateSnapshotCommand] (DefaultQuartzScheduler3) [4b091013] Unable to update the image info for image 'a05a9d7f-3c60-4781-936f-cb72346349e7' (image group: '744b0c97-f982-4839-b1e3-0dde1bb8b334') on domain 'a792c4e9-f42e-4ecc-bde9-266bdc23ab36'
2017-04-19 14:00:53,807+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler3) [656e88e4] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM host_mixed_1 command TeardownImageVDS failed: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/e5633304-2fa1-49f6-ab37-dda69e8b1767 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/93789814-fbad-4e01-96a2-d9da02b271e0 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/a09a5b74-464e-4b6b-89e3-a36373f48a10 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/e611febd-20a0-4764-a33e-44ac14b635f4 in use.\']\\na792c4e9-f42e-4ecc-bde9-266bdc23ab36/[\'a09a5b74-464e-4b6b-89e3-a36373f48a10\', \'93789814-fbad-4e01-96a2-d9da02b271e0\', \'e611febd-20a0-4764-a33e-44ac14b635f4\', \'e5633304-2fa1-49f6-ab37-dda69e8b1767\']",)',)
2017-04-19 14:00:53,808+03 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (DefaultQuartzScheduler3) [656e88e4] Command 'TeardownImageVDSCommand(HostName = host_mixed_1, ImageActionsVDSCommandParameters:{runAsync='true', hostId='0ba63f2d-06b2-4392-82ce-7abd60614b2a'})' execution failed: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/e5633304-2fa1-49f6-ab37-dda69e8b1767 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/93789814-fbad-4e01-96a2-d9da02b271e0 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/a09a5b74-464e-4b6b-89e3-a36373f48a10 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/e611febd-20a0-4764-a33e-44ac14b635f4 in use.\']\\na792c4e9-f42e-4ecc-bde9-266bdc23ab36/[\'a09a5b74-464e-4b6b-89e3-a36373f48a10\', \'93789814-fbad-4e01-96a2-d9da02b271e0\', \'e611febd-20a0-4764-a33e-44ac14b635f4\', \'e5633304-2fa1-49f6-ab37-dda69e8b1767\']",)',)
2017-04-19 14:00:53,808+03 ERROR [org.ovirt.engine.core.bll.snapshots.CreateSnapshotCommand] (DefaultQuartzScheduler3) [656e88e4] Unable to update the image info for image 'e611febd-20a0-4764-a33e-44ac14b635f4' (image group: '90992b01-486f-4d54-8e1d-435cb889a671') on domain 'a792c4e9-f42e-4ecc-bde9-266bdc23ab36'
2017-04-19 14:04:05,092+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) [7fb1458f] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM host_mixed_1 command TeardownImageVDS failed: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/2a29823b-f8d5-4643-b193-d9ddf1105b35 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/35d1fd45-241d-4b5a-835f-ae14eafd37fd in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/0c953587-1c78-488b-bdec-e42f0e256559 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/a05a9d7f-3c60-4781-936f-cb72346349e7 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/efe1a38a-5127-43b1-a1f0-4cdb1dd76744 in use.\']\\na792c4e9-f42e-4ecc-bde9-266bdc23ab36/[\'a05a9d7f-3c60-4781-936f-cb72346349e7\', \'2a29823b-f8d5-4643-b193-d9ddf1105b35\', \'0c953587-1c78-488b-bdec-e42f0e256559\', \'35d1fd45-241d-4b5a-835f-ae14eafd37fd\', \'efe1a38a-5127-43b1-a1f0-4cdb1dd76744\']",)',)
2017-04-19 14:04:05,092+03 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (DefaultQuartzScheduler5) [7fb1458f] Command 'TeardownImageVDSCommand(HostName = host_mixed_1, ImageActionsVDSCommandParameters:{runAsync='true', hostId='0ba63f2d-06b2-4392-82ce-7abd60614b2a'})' execution failed: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/2a29823b-f8d5-4643-b193-d9ddf1105b35 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/35d1fd45-241d-4b5a-835f-ae14eafd37fd in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/0c953587-1c78-488b-bdec-e42f0e256559 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/a05a9d7f-3c60-4781-936f-cb72346349e7 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/efe1a38a-5127-43b1-a1f0-4cdb1dd76744 in use.\']\\na792c4e9-f42e-4ecc-bde9-266bdc23ab36/[\'a05a9d7f-3c60-4781-936f-cb72346349e7\', \'2a29823b-f8d5-4643-b193-d9ddf1105b35\', \'0c953587-1c78-488b-bdec-e42f0e256559\', \'35d1fd45-241d-4b5a-835f-ae14eafd37fd\', \'efe1a38a-5127-43b1-a1f0-4cdb1dd76744\']",)',)
2017-04-19 14:04:05,093+03 ERROR [org.ovirt.engine.core.bll.snapshots.CreateSnapshotCommand] (DefaultQuartzScheduler5) [7fb1458f] Unable to update the image info for image 'efe1a38a-5127-43b1-a1f0-4cdb1dd76744' (image group: '744b0c97-f982-4839-b1e3-0dde1bb8b334') on domain 'a792c4e9-f42e-4ecc-bde9-266bdc23ab36'
2017-04-19 14:05:00,365+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) [56ffe4ce] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM host_mixed_1 command TeardownImageVDS failed: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/e5633304-2fa1-49f6-ab37-dda69e8b1767 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/93789814-fbad-4e01-96a2-d9da02b271e0 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/a09a5b74-464e-4b6b-89e3-a36373f48a10 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/e611febd-20a0-4764-a33e-44ac14b635f4 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/f60979a7-6632-41c9-8015-cd72d5ee101e in use.\']\\na792c4e9-f42e-4ecc-bde9-266bdc23ab36/[\'a09a5b74-464e-4b6b-89e3-a36373f48a10\', \'f60979a7-6632-41c9-8015-cd72d5ee101e\', \'93789814-fbad-4e01-96a2-d9da02b271e0\', \'e611febd-20a0-4764-a33e-44ac14b635f4\', \'e5633304-2fa1-49f6-ab37-dda69e8b1767\']",)',)
2017-04-19 14:05:00,366+03 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (DefaultQuartzScheduler5) [56ffe4ce] Command 'TeardownImageVDSCommand(HostName = host_mixed_1, ImageActionsVDSCommandParameters:{runAsync='true', hostId='0ba63f2d-06b2-4392-82ce-7abd60614b2a'})' execution failed: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/e5633304-2fa1-49f6-ab37-dda69e8b1767 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/93789814-fbad-4e01-96a2-d9da02b271e0 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/a09a5b74-464e-4b6b-89e3-a36373f48a10 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/e611febd-20a0-4764-a33e-44ac14b635f4 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/f60979a7-6632-41c9-8015-cd72d5ee101e in use.\']\\na792c4e9-f42e-4ecc-bde9-266bdc23ab36/[\'a09a5b74-464e-4b6b-89e3-a36373f48a10\', \'f60979a7-6632-41c9-8015-cd72d5ee101e\', \'93789814-fbad-4e01-96a2-d9da02b271e0\', \'e611febd-20a0-4764-a33e-44ac14b635f4\', \'e5633304-2fa1-49f6-ab37-dda69e8b1767\']",)',)
2017-04-19 14:05:00,366+03 ERROR [org.ovirt.engine.core.bll.snapshots.CreateSnapshotCommand] (DefaultQuartzScheduler5) [56ffe4ce] Unable to update the image info for image 'f60979a7-6632-41c9-8015-cd72d5ee101e' (image group: '90992b01-486f-4d54-8e1d-435cb889a671') on domain 'a792c4e9-f42e-4ecc-bde9-266bdc23ab36'

Matching VDSM Errors:

2017-04-19 14:00:09,239+0300 ERROR (jsonrpc/3) [storage.TaskManager.Task] (Task='5ff4f048-8644-4e1a-8a77-c4a51062b504') Unexpected error (task:870)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 877, in _run
    return fn(*args, **kargs)
  File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 52, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 3194, in teardownImage
    dom.deactivateImage(imgUUID)
  File "/usr/share/vdsm/storage/blockSD.py", line 1278, in deactivateImage
    lvm.deactivateLVs(self.sdUUID, volUUIDs)
  File "/usr/share/vdsm/storage/lvm.py", line 1298, in deactivateLVs
    _setLVAvailability(vgName, toDeactivate, "n")
  File "/usr/share/vdsm/storage/lvm.py", line 843, in _setLVAvailability
    raise error(str(e))
CannotDeactivateLogicalVolume: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/2a29823b-f8d5-4643-b193-d9ddf1105b35 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/35d1fd45-241d-4b5a-835f- in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/0c953587-1c78-488b-bdec-e42f0e256559 in use.\', \'  Logical volume a792c4e9-f42e-4ecc-bde9-266bdc23ab36/a05a9d7f-3c60-4781-936f-cb72346349e7 in use.\']\\na792c4e9-f42e-4ecc-bde9-266bdc23ab36/[\'a05a9d7f-3c60-4781-936f-cb72346349e7\', \'2a29823b-f8d5-4643-b193-d9ddf1105b35\', \'0c953587-1c78-488b-bdec-e42f0e256559\', \'35d1fd45-241d-4b5a-835f-ae14eafd37fd\']",)',)


2017-04-19 14:04:04,977+0300 ERROR (jsonrpc/6) [storage.TaskManager.Task] (Task='a7f4a830-964e-403b-a7f6-b2a0888effef') Unexpected error (task:870)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 877, in _run
    return fn(*args, **kargs)
  File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 52, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 3194, in teardownImage
    dom.deactivateImage(imgUUID)
  File "/usr/share/vdsm/storage/blockSD.py", line 1278, in deactivateImage
    lvm.deactivateLVs(self.sdUUID, volUUIDs)
  File "/usr/share/vdsm/storage/lvm.py", line 1298, in deactivateLVs
    _setLVAvailability(vgName, toDeactivate, "n")
  File "/usr/share/vdsm/storage/lvm.py", line 843, in _setLVAvailability
    raise error(str(e))

Comment 21 Avihai 2017-04-19 12:09:46 UTC
Created attachment 1272583 [details]
engine , vdsm , restAPI get

Comment 22 Allon Mureinik 2017-04-20 12:19:22 UTC
(In reply to Avihai from comment #20)
> Description of problem:
> The creation of snapshot after  DC upgrade(4.0->4.1) is faulty.
> As a result, qcow_version is not updated to qcow2_v3 as expected.
> 
> Version-Release number of selected component (if applicable):
> Engine:
> ovirt-engine-4.1.1.8-0.1.el7.noarch
> 
> VDSM:
> 4.19.10.1

I don't understand. Are you reopening this bug? Is this a new bug?

Comment 23 Avihai 2017-04-20 12:39:02 UTC
(In reply to Allon Mureinik from comment #22)
> (In reply to Avihai from comment #20)
> > Description of problem:
> > The creation of snapshot after  DC upgrade(4.0->4.1) is faulty.
> > As a result, qcow_version is not updated to qcow2_v3 as expected.
> > 
> > Version-Release number of selected component (if applicable):
> > Engine:
> > ovirt-engine-4.1.1.8-0.1.el7.noarch
> > 
> > VDSM:
> > 4.19.10.1
> 
> I don't understand. Are you reopening this bug? Is this a new bug?

Hi Allon,
This is another scenario in which I saw the same issue, on engine 4.1.1.8.

I added a detailed description to provide more information to Maor, in case it can help or shed new light on this bug.

I also discussed this with him & it looks like the same trigger & issue.

Anyway, I'm not reopening, as the fix is not yet available on master (currently engine 4.1.1.9).

Comment 24 Maor 2017-04-22 20:10:35 UTC
(In reply to Allon Mureinik from comment #22)
> (In reply to Avihai from comment #20)
> > Description of problem:
> > The creation of snapshot after  DC upgrade(4.0->4.1) is faulty.
> > As a result, qcow_version is not updated to qcow2_v3 as expected.
> > 
> > Version-Release number of selected component (if applicable):
> > Engine:
> > ovirt-engine-4.1.1.8-0.1.el7.noarch
> > 
> > VDSM:
> > 4.19.10.1
> 
> I don't understand. Are you reopening this bug? Is this a new bug?

This is the same issue, since it is a live snapshot on a running VM.

Comment 25 Avihai 2017-04-27 06:38:16 UTC
Verified on 4.1.2.

