Bug 1464766

Summary: RESTAPI - Amend does not start after 1st amend fails (due to VDSM kill)
Product: [oVirt] ovirt-engine Reporter: Avihai <aefrat>
Component: BLL.Storage    Assignee: Maor <mlipchuk>
Status: CLOSED CURRENTRELEASE QA Contact: Avihai <aefrat>
Severity: medium Docs Contact:
Priority: unspecified    
Version: 4.1.3.4    CC: aefrat, amureini, bugs, stirabos
Target Milestone: ovirt-4.1.4    Keywords: AutomationBlocker
Target Release: 4.1.4    Flags: rule-engine: ovirt-4.1+
rule-engine: blocker+
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2017-07-28 14:19:22 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: Storage RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Attachments:
Description Flags
engine & vdsm logs none

Description Avihai 2017-06-25 14:45:34 UTC
Created attachment 1291720 [details]
engine & vdsm logs

Description of problem:
Amend of diskX does not start after the 1st amend failed (due to a VDSM kill).
For the following amend, issued after the 1st amend failure, the amend does not appear at all: there is no amend start message in the engine or in VDSM.

The only message in the engine is 'VM vm_TestCase18337_REST_NFS_2515293925 vm_TestCase18337_REST_NFS_2515293925_Disk1 disk was updated', with no amend start/finish event.

The API response to the 2nd amend request is successful, but the amend still does not start.

Version-Release number of selected component (if applicable):
4.1.3.4/4.19.19-1

How reproducible:
100%


Steps to Reproduce:

    1. Create a DC + cluster on v3
    2. Create a VM with a thin disk and create 5 snapshots
    3. Upgrade the cluster & DC to version 4
    4. 1st amend: amend the snapshot images to version 1.1 and kill vdsmd on the SPM
    5. 2nd amend: repeat the amend once the SPM has recovered (see the REST sketch below)
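
For reference, a minimal sketch of the amend call issued in steps 4 and 5 (not the actual test code), assembled from the PUT request captured further down in this report. The engine address and credentials are placeholders; the VM ID, disk-attachment ID and request body are the ones from this report.

# Minimal sketch, assuming a reachable 4.1 engine; not the ART test code.
import requests

ENGINE = "https://engine.example.com/ovirt-engine/api"   # placeholder engine address
AUTH = ("admin@internal", "password")                     # placeholder credentials
VM_ID = "b79896a3-12c3-420f-a2ce-35f807b7b1dd"            # VM ID from this report
ATTACHMENT_ID = "45ce5457-08d4-494b-9f17-1b55a8ffd3ea"    # disk attachment ID from this report

# Updating qcow_version on the attached disk is what triggers the amend flow.
BODY = """<disk_attachment>
    <disk>
        <alias>vm_TestCase18337_REST_NFS_2515293925_Disk1</alias>
        <qcow_version>qcow2_v3</qcow_version>
    </disk>
</disk_attachment>"""

resp = requests.put(
    f"{ENGINE}/vms/{VM_ID}/diskattachments/{ATTACHMENT_ID}",
    data=BODY,
    headers={"Content-Type": "application/xml",
             "Correlation-Id": "diskattachments_update_manual"},
    auth=AUTH,
    verify=False,   # lab engine with a self-signed certificate
)
# The bug: the 2nd call returns 200, yet no amend job is started on the SPM.
print(resp.status_code)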


Actual results:
The amend does not start in the engine or in VDSM.
The disk itself was upgraded by the 1st amend, but the test fails because the disk snapshots are not upgraded/amended to qcow2_v3 (see the check sketched below).
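
As a reference for how the failure is detected, a minimal sketch of the check over the REST API, assuming the 4.1 disksnapshots sub-collection reports qcow_version the same way disks do. The engine address and credentials are placeholders; the storage domain ID is taken from the VDSM log below.

import requests
import xml.etree.ElementTree as ET

ENGINE = "https://engine.example.com/ovirt-engine/api"   # placeholder engine address
AUTH = ("admin@internal", "password")                     # placeholder credentials
SD_ID = "ed2661b4-8a64-4c76-92a8-de6b51612eeb"            # storage domain ID from the VDSM log

resp = requests.get(f"{ENGINE}/storagedomains/{SD_ID}/disksnapshots",
                    headers={"Accept": "application/xml"}, auth=AUTH, verify=False)
root = ET.fromstring(resp.content)
for snap in root.findall("disk_snapshot"):
    # After a successful amend every snapshot volume should report qcow2_v3;
    # in this bug the snapshot volumes are not amended.
    print(snap.findtext("alias"), snap.findtext("qcow_version"))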

Expected results:
The amend should start after the 1st amend fails.

Additional info:


The 1st amend (issued before the VDSM kill) failed:
2017-06-25 15:32:19,837+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler3) [73d127b0] EVENT_ID: USER_AMEND_IMAGE_FINISH_FAILURE(1,330), Correlation ID: 73d127b0, Call Stack: null, Custom Event ID: -1, Message: Failed to amend vm_TestCase18337_REST_NFS_2515293925_Disk1.

The 2nd amend was issued via the REST API:
Jun 25, 2017 3:33:06 PM

2017-06-25 15:33:06,039 - MainThread - diskattachments - DEBUG - PUT request content is --  url:/ovirt-engine/api/vms/b79896a3-12c3-420f-a2ce-35f807b7b1dd/diskattachments/45ce5457-08d4-494b-9f17-1b55a8ffd3ea 
body:

<disk_attachment>
    <disk>
        <alias>vm_TestCase18337_REST_NFS_2515293925_Disk1</alias>
        <qcow_version>qcow2_v3</qcow_version>
    </disk>
</disk_attachment>
 
2017-06-25 15:33:06,040 - MainThread - diskattachments - INFO - Using Correlation-Id: diskattachments_update_bd2b2558-1b7d-4c85
2017-06-25 15:33:06,423 - MainThread - core_api - DEBUG - Request PUT response time: 0.046
2017-06-25 15:33:06,424 - MainThread - diskattachments - DEBUG - Cleaning Correlation-Id: diskattachments_update_bd2b2558-1b7d-4c85
2017-06-25 15:33:06,424 - MainThread - diskattachments - DEBUG - Response code is valid: [200, 201] 
2017-06-25 15:33:06,425 - MainThread - diskattachments - DEBUG - Response body for PUT request is: 
<disk_attachment href="/ovirt-engine/api/vms/b79896a3-12c3-420f-a2ce-35f807b7b1dd/diskattachments/45ce5457-08d4-494b-9f17-1b55a8ffd3ea" id="45ce5457-08d4-494b-9f17-1b55a8ffd3ea">
    <active>true</active>
    <bootable>true</bootable>
    <interface>virtio</interface>
    <pass_discard>false</pass_discard>
    <read_only>false</read_only>
    <uses_scsi_reservation>false</uses_scsi_reservation>
    <disk href="/ovirt-engine/api/disks/45ce5457-08d4-494b-9f17-1b55a8ffd3ea" id="45ce5457-08d4-494b-9f17-1b55a8ffd3ea"/>
    <vm href="/ovirt-engine/api/vms/b79896a3-12c3-420f-a2ce-35f807b7b1dd" id="b79896a3-12c3-420f-a2ce-35f807b7b1dd"/>
</disk_attachment>
 


Engine log:
2017-06-25 15:33:06,281+03 INFO  [org.ovirt.engine.core.bll.storage.disk.UpdateVmDiskCommand] (default task-24) [diskattachments_update_bd2b2558-1b7d] Lock Acquired to object 'EngineLock:{exclusiveLocks='[b79896a3-12c3-420f-a2ce-35f807b7b1dd=VM_DISK_BOOT]', sharedLocks='[b79896a3-12c3-420f-a2ce-35f807b7b1dd=VM]'}'
2017-06-25 15:33:06,297+03 INFO  [org.ovirt.engine.core.bll.storage.disk.UpdateVmDiskCommand] (default task-24) [diskattachments_update_bd2b2558-1b7d] Running command: UpdateVmDiskCommand internal: false. Entities affected :  ID: 45ce5457-08d4-494b-9f17-1b55a8ffd3ea Type: DiskAction group EDIT_DISK_PROPERTIES with role type USER
2017-06-25 15:33:06,389+03 INFO  [org.ovirt.engine.core.bll.storage.disk.UpdateVmDiskCommand] (default task-24) [diskattachments_update_bd2b2558-1b7d] Lock freed to object 'EngineLock:{exclusiveLocks='[b79896a3-12c3-420f-a2ce-35f807b7b1dd=VM_DISK_BOOT]', sharedLocks='[b79896a3-12c3-420f-a2ce-35f807b7b1dd=VM]'}'
2017-06-25 15:33:06,404+03 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-24) [diskattachments_update_bd2b2558-1b7d] EVENT_ID: USER_UPDATE_VM_DISK(88), Correlation ID: diskattachments_update_bd2b2558-1b7d, Call Stack: null, Custom Event ID: -1, Message: VM vm_TestCase18337_REST_NFS_2515293925 vm_TestCase18337_REST_NFS_2515293925_Disk1 disk was updated by admin@internal-authz.

VDSM log:
1st Amend:
2017-06-25 15:32:04,998+0300 INFO  (jsonrpc/1) [vdsm.api] START sdm_amend_volume(job_id='a60f3cfe-8e7a-4d8e-9f15-bbc53f2a2969', vol_info={'generation': 0, 'img_id': '45ce5457-08d4-494b-9f17-1b55a8ffd3ea', 'sd_id': 'ed2661b4-8a64-4c76-92a8-de6b51612eeb', 'vol_id': 'd322e512-5d79-4e9a-830b-c778071e45b6'}, qcow2_attr={'compat': '1.1'}) from=::ffff:10.35.161.181,44302, flow_id=73d127b0 (api:46)
2017-06-25 15:32:04,999+0300 DEBUG (jsonrpc/1) [storage.TaskManager] scheduled job amend_volume for task 9e6c5c90-919c-48eb-8b6d-35c3fcc33544  (taskManager:79)
2017-06-25 15:32:05,000+0300 INFO  (jsonrpc/1) [vdsm.api] FINISH sdm_amend_volume return=None from=::ffff:10.35.161.181,44302, flow_id=73d127b0 (api:52)
2017-06-25 15:32:05,003+0300 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call SDM.amend_volume succeeded in 0.00 seconds (__init__:539)
2017-06-25 15:32:05,006+0300 DEBUG (tasks/3) [storage.TaskManager.Task] (Task='9e6c5c90-919c-48eb-8b6d-35c3fcc33544') Task.run: running job 0: amend_volume: <bound method Job.run of <Job id=a60f3cfe-8e7a-4d8e-9f15-bbc53f2a2969 status=pending at 0x38361936>> (args: () kwargs: {}) (task:909)
2017-06-25 15:32:05,006+0300 DEBUG (tasks/3) [storage.TaskManager.Task] (Task='9e6c5c90-919c-48eb-8b6d-35c3fcc33544') Job.run: running amend_volume: <bound method Job.run of <Job id=a60f3cfe-8e7a-4d8e-9f15-bbc53f2a2969 status=pending at 0x38361936>> (args: () kwargs: {}) callback None (task:330)

VDSM restart:
2017-06-25 15:32:24,214+0300 INFO  (MainThread) [vds] (PID: 9364) I am the actual vdsm 4.19.19-1.el7ev storage-ge1-vdsm1.scl.lab.tlv.redhat.com (3.10.0-514.el7.x86_64) (vdsm:145)

2nd Amend (engine log only; no corresponding amend appears in the VDSM log):
2017-06-25 15:33:06,281+03 INFO  [org.ovirt.engine.core.bll.storage.disk.UpdateVmDiskCommand] (default task-24) [diskattachments_update_bd2b2558-1b7d] Lock Acquired to object 'EngineLock:{exclusiveLocks='[b79896a3-12c3-420f-a2ce-35f807b7b1dd=VM_DISK_BOOT]', sharedLocks='[b79896a3-12c3-420f-a2ce-35f807b7b1dd=VM]'}'
2017-06-25 15:33:06,297+03 INFO  [org.ovirt.engine.core.bll.storage.disk.UpdateVmDiskCommand] (default task-24) [diskattachments_update_bd2b2558-1b7d] Running command: UpdateVmDiskCommand internal: false. Entities affected :  ID: 45ce5457-08d4-494b-9f17-1b55a8ffd3ea Type: DiskAction group EDIT_DISK_PROPERTIES with role type USER
2017-06-25 15:33:06,389+03 INFO  [org.ovirt.engine.core.bll.storage.disk.UpdateVmDiskCommand] (default task-24) [diskattachments_update_bd2b2558-1b7d] Lock freed to object 'EngineLock:{exclusiveLocks='[b79896a3-12c3-420f-a2ce-35f807b7b1dd=VM_DISK_BOOT]', sharedLocks='[b79896a3-12c3-420f-a2ce-35f807b7b1dd=VM]'}'
2017-06-25 15:33:06,404+03 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-24) [diskattachments_update_bd2b2558-1b7d] EVENT_ID: USER_UPDATE_VM_DISK(88), Correlation ID: diskattachments_update_bd2b2558-1b7d, Call Stack: null, Custom Event ID: -1, Message: VM vm_TestCase18337_REST_NFS_2515293925 vm_TestCase18337_REST_NFS_2515293925_Disk1 disk was updated by admin@internal-authz.


Disk name = vm_TestCase18337_REST_NFS_2515293925_Disk1 
DISK ID = 45ce5457-08d4-494b-9f17-1b55a8ffd3ea
VM ID = 	b79896a3-12c3-420f-a2ce-35f807b7b1dd


Automation timeline:
2017-06-25 15:28:10,956 - MainThread - art.ll_lib.dcs - INFO - Add datacenter dc_upgrade_4_0_to_4_1 with {'version': '4.0', 'name': 'dc_upgrade_4_0_to_4_1'}
2017-06-25 15:28:10,957 - MainThread - datacenters - INFO - Using Correlation-Id: datacenters_create_b1936d6c-40b0-4fcd
2017-06-25 15:28:11,427 - MainThread - datacenters - INFO - New entity was added
2017-06-25 15:28:11,433 - MainThread - datacenters - INFO - Property 'data_center->name' has correct value: dc_upgrade_4_0_to_4_1
2017-06-25 15:28:11,434 - MainThread - datacenters - INFO - Property 'data_center->local' has correct value: false
2017-06-25 15:28:11,435 - MainThread - datacenters - INFO - Property 'data_center->version->major' has correct value: 4
2017-06-25 15:28:11,436 - MainThread - datacenters - INFO - Property 'data_center->version->minor' has correct value: 0
2017-06-25 15:28:11,595 - MainThread - datacenters - INFO - Validated supported versions of data center dc_upgrade_4_0_to_4_1
2017-06-25 15:28:11,596 - MainThread - art.rhevm_api.tests_lib.low_level.clusters - INFO - Create cluster cluster_upgrade_4_0_to_4_1 with {'data_center': 'dc_upgrade_4_0_to_4_1', 'name': 'cluster_upgrade_4_0_to_4_1', 'version': '4.0', 'cpu': 'Intel Conroe Family'}
2017-06-25 15:28:11,765 - MainThread - clusters - INFO - Using Correlation-Id: clusters_create_5d4b5519-adae-4b97
2017-06-25 15:28:12,471 - MainThread - clusters - INFO - New entity was added
2017-06-25 15:28:12,482 - MainThread - clusters - INFO - Property 'cluster->data_center->id' has correct value: 897cc17e-bafe-486d-8581-c9f16eef5d67
2017-06-25 15:28:12,483 - MainThread - clusters - INFO - Property 'cluster->name' has correct value: cluster_upgrade_4_0_to_4_1
2017-06-25 15:28:12,484 - MainThread - clusters - INFO - Property 'cluster->version->major' has correct value: 4
2017-06-25 15:28:12,484 - MainThread - clusters - INFO - Property 'cluster->version->minor' has correct value: 0
2017-06-25 15:28:12,485 - MainThread - clusters - INFO - Property 'cluster->cpu->type_' has correct value: Intel Conroe Family
2017-06-25 15:28:12,494 - MainThread - root - INFO - Switch host host_mixed_1 to different cluster cluster_upgrade_4_0_to_4_1 with {'activate': True}
2017-06-25 15:28:12,497 - MainThread - root - INFO - Checks if host host_mixed_1 is in maintenance state
2017-06-25 15:28:12,500 - MainThread - root - INFO - Returns host host_mixed_1 status
2017-06-25 15:28:12,503 - MainThread - root - INFO - Get host object by host_name host_mixed_1. with {'attribute': 'name'}
2017-06-25 15:28:12,733 - MainThread - root - INFO - Update properties of host host_mixed_1 (provided in parameters) with {'cluster': 'cluster_upgrade_4_0_to_4_1'}
2017-06-25 15:28:12,738 - MainThread - root - DEBUG - Skipping duplicate log-messages...
2017-06-25 15:28:13,129 - MainThread - hosts - INFO - Using Correlation-Id: hosts_update_d809ec1e-7d6f-4327
2017-06-25 15:28:13,948 - MainThread - hosts - INFO - host was updated
2017-06-25 15:28:13,962 - MainThread - hosts - INFO - Property 'host->cluster->id' has correct value: 305a5f68-fec4-4abb-b63a-bc097dd5de7b
2017-06-25 15:28:13,975 - MainThread - root - INFO - Activate host host_mixed_1 (set status to UP) with {'wait': True}
2017-06-25 15:28:13,978 - MainThread - root - INFO - Get host object by host_name host_mixed_1. with {'attribute': 'name'}
2017-06-25 15:28:14,218 - MainThread - hosts - INFO - Using Correlation-Id: hosts_syncAction_e0f562dc-c53d-414c
2017-06-25 15:28:24,864 - MainThread - hosts - INFO - host status is 'up'
2017-06-25 15:28:24,875 - MainThread - root - INFO - Get SPM host from the list of hosts host_mixed_1,host_mixed_2,host_mixed_3
2017-06-25 15:28:24,883 - MainThread - root - INFO - Checking Storage Pool Manager (SPM) status of the host host_mixed_1.
2017-06-25 15:28:24,888 - MainThread - root - INFO - Get host object by host_name host_mixed_1. with {'attribute': 'name'}
2017-06-25 15:28:25,136 - MainThread - art.ll_lib.hosts - INFO - check_host_spm_status - SPM Status of host host_mixed_1 is: none
2017-06-25 15:28:25,138 - MainThread - root - ERROR - Failed to checking storage pool manager (spm) status of the host host_mixed_1.
2017-06-25 15:28:25,146 - MainThread - root - INFO - Checking Storage Pool Manager (SPM) status of the host host_mixed_2.
2017-06-25 15:28:25,152 - MainThread - root - INFO - Get host object by host_name host_mixed_2. with {'attribute': 'name'}
2017-06-25 15:28:25,387 - MainThread - art.ll_lib.hosts - INFO - check_host_spm_status - SPM Status of host host_mixed_2 is: none
2017-06-25 15:28:25,388 - MainThread - root - ERROR - Failed to checking storage pool manager (spm) status of the host host_mixed_2.
2017-06-25 15:28:25,394 - MainThread - root - INFO - Checking Storage Pool Manager (SPM) status of the host host_mixed_3.
2017-06-25 15:28:25,398 - MainThread - root - INFO - Get host object by host_name host_mixed_3. with {'attribute': 'name'}
2017-06-25 15:28:25,640 - MainThread - art.ll_lib.hosts - INFO - check_host_spm_status - SPM Status of host host_mixed_3 is: spm
2017-06-25 15:28:25,642 - MainThread - art.hl_lib.sds - INFO - Add NFS storage domain upgrade_4_0_to_4_1nfs0 to datacenter dc_upgrade_4_0_to_4_1
2017-06-25 15:28:25,888 - MainThread - root - INFO - Get Host host_mixed_3 compatibility version
2017-06-25 15:28:25,892 - MainThread - root - DEBUG - Skipping duplicate log-messages...
2017-06-25 15:28:26,288 - MainThread - root - INFO - Get Cluster golden_env_mixed_1 compatibility version
2017-06-25 15:28:26,448 - MainThread - storagedomains - INFO - Using Correlation-Id: storagedomains_create_d04115c8-246d-4055
2017-06-25 15:28:35,671 - MainThread - storagedomains - INFO - New entity was added
2017-06-25 15:28:35,673 - MainThread - storagedomains - INFO - Property 'storage_domain->storage->address' has correct value: yellow-vdsb.qa.lab.tlv.redhat.com
2017-06-25 15:28:35,673 - MainThread - storagedomains - INFO - Property 'storage_domain->storage->path' has correct value: /Storage_NFS/storage_local_ge1_nfs_3
2017-06-25 15:28:35,674 - MainThread - storagedomains - INFO - Property 'storage_domain->storage->type_' has correct value: nfs
2017-06-25 15:28:35,674 - MainThread - storagedomains - INFO - Property 'storage_domain->storage_format' has correct value: v3
2017-06-25 15:28:35,674 - MainThread - storagedomains - INFO - Property 'storage_domain->name' has correct value: upgrade_4_0_to_4_1nfs0
2017-06-25 15:28:35,675 - MainThread - storagedomains - INFO - Attribute: import___ changed to: import
2017-06-25 15:28:35,675 - MainThread - storagedomains - INFO - Property 'storage_domain->type_' has correct value: data
2017-06-25 15:28:36,258 - MainThread - storagedomains - INFO - Using Correlation-Id: storagedomains_create_d0704997-cc75-4190
2017-06-25 15:29:35,054 - MainThread - storagedomains - INFO - New entity was added
2017-06-25 15:29:35,060 - MainThread - storagedomains - INFO - Property 'storage_domain->id' has correct value: ed2661b4-8a64-4c76-92a8-de6b51612eeb
2017-06-25 15:29:35,062 - MainThread - storagedomains - INFO - Attribute: import___ changed to: import
2017-06-25 15:29:35,463 - MainThread - datacenters - INFO - data_center status is 'up'
2017-06-25 15:29:36,254 - MainThread - storagedomains - WARNING - Storage domain upgrade_4_0_to_4_1nfs0 already activated
2017-06-25 15:29:36,255 - MainThread - art.ll_lib.jobs - INFO - Waiting for jobs ['Adding .* Storage Domain .*', 'Activating Storage Domain .* on Data Center .*']
2017-06-25 15:29:36,412 - MainThread - art.ll_lib.jobs - INFO - Active jobs: []
2017-06-25 15:29:36,567 - MainThread - art.ll_lib.jobs - INFO - JOB 'Adding NFS Storage Domain upgrade_4_0_to_4_1nfs0 to Data Center <UNKNOWN>' TOOK 4.035 seconds
2017-06-25 15:29:36,716 - MainThread - art.ll_lib.jobs - INFO - All jobs are gone
2017-06-25 15:29:39,256 - MainThread - test_utils - INFO - All tasks are gone
2017-06-25 15:29:39,260 - MainThread - root - INFO - Getting all templates in cluster cluster_upgrade_4_0_to_4_1
2017-06-25 15:29:39,661 - MainThread - root - INFO - Templates in cluster: []
2017-06-25 15:29:39,679 - MainThread - root - INFO - Description vm_TestCase18337_REST_NFS_2515293925: add new vm (without starting it) with {'name': 'vm_TestCase18337_REST_NFS_2515293925', 'cpu_socket': 1, 'copy_permissions': False, 'cpu_cores': 1, 'cluster': 'cluster_upgrade_4_0_to_4_1', 'display_type': 'spice', 'template': 'Blank', 'memory': 1073741824, 'async': False, 'os_type': 'RHEL6x64', 'type': 'desktop', 'wait': True}
2017-06-25 15:29:39,685 - MainThread - root - INFO - Gets template object by template_name Blank and specific template version 1 with {'all_content': False}
2017-06-25 15:29:39,691 - MainThread - root - INFO - Get template objects by using the template_name Blank with {'all_content': False}
2017-06-25 15:29:40,021 - MainThread - root - DEBUG - Skipping duplicate log-messages...
2017-06-25 15:29:40,027 - MainThread - root - DEBUG - Skipping duplicate log-messages...
2017-06-25 15:29:40,352 - MainThread - vms - INFO - Using Correlation-Id: vms_create_8321725e-50ed-4e4c
2017-06-25 15:29:42,202 - MainThread - vms - INFO - New entity was added
2017-06-25 15:29:42,208 - MainThread - vms - INFO - Property 'vm->memory' has correct value: 1073741824
2017-06-25 15:29:42,208 - MainThread - vms - INFO - Property 'vm->name' has correct value: vm_TestCase18337_REST_NFS_2515293925
2017-06-25 15:29:42,208 - MainThread - vms - INFO - Property 'vm->display->type_' has correct value: spice
2017-06-25 15:29:42,209 - MainThread - vms - INFO - Property 'vm->cluster->id' has correct value: 305a5f68-fec4-4abb-b63a-bc097dd5de7b
2017-06-25 15:29:42,211 - MainThread - vms - INFO - Property 'vm->type_' has correct value: desktop
2017-06-25 15:29:42,211 - MainThread - vms - INFO - Property 'vm->template->id' has correct value: 00000000-0000-0000-0000-000000000000
2017-06-25 15:29:42,212 - MainThread - vms - INFO - Property 'vm->description' has correct value: vm_TestCase18337_REST_NFS_2515293925
2017-06-25 15:29:42,212 - MainThread - vms - INFO - Property 'vm->os->type_' has correct value: rhel_6x64
2017-06-25 15:29:42,212 - MainThread - vms - INFO - Property 'vm->cpu->topology->cores' has correct value: 1
2017-06-25 15:29:42,212 - MainThread - vms - INFO - Property 'vm->cpu->topology->sockets' has correct value: 1
2017-06-25 15:29:42,541 - MainThread - vms - INFO - vm status is 'down'
2017-06-25 15:29:42,555 - MainThread - root - INFO - Add vNIC name nic1 to VM vm_TestCase18337_REST_NFS_2515293925 with {'vnic_profile': 'ovirtmgmt', 'network': 'ovirtmgmt', 'plugged': 'true', 'interface': 'virtio', 'linked': 'true'}
2017-06-25 15:29:43,471 - MainThread - root - INFO - Get VNIC profile object. with {'cluster': 'cluster_upgrade_4_0_to_4_1', 'name': 'ovirtmgmt', 'network': 'ovirtmgmt'}
2017-06-25 15:29:43,477 - MainThread - root - INFO - Get all the VNIC profiles that belong to a certain network ovirtmgmt with {'cluster': 'cluster_upgrade_4_0_to_4_1'}
2017-06-25 15:29:43,481 - MainThread - root - INFO - Find desired network ovirtmgmt using cluster cluster_upgrade_4_0_to_4_1 or data center
2017-06-25 15:29:45,020 - MainThread - art.ll_lib.vms - INFO - Get NICs href from VM vm_TestCase18337_REST_NFS_2515293925
2017-06-25 15:29:45,021 - MainThread - art.ll_lib.vms - INFO - Add vNIC nic1 with {'vnic_profile': 'ovirtmgmt', 'network': 'ovirtmgmt', 'plugged': 'true', 'interface': 'virtio', 'linked': 'true', 'name': 'nic1'} to VM vm_TestCase18337_REST_NFS_2515293925
2017-06-25 15:29:45,211 - MainThread - nics - INFO - Using Correlation-Id: nics_create_4647455a-dd18-4a29
2017-06-25 15:29:46,117 - MainThread - nics - INFO - New entity was added
2017-06-25 15:29:46,122 - MainThread - nics - INFO - Property 'nic->name' has correct value: nic1
2017-06-25 15:29:46,124 - MainThread - nics - INFO - Property 'nic->interface' has correct value: virtio
2017-06-25 15:29:46,125 - MainThread - nics - INFO - Property 'nic->linked' has correct value: true
2017-06-25 15:29:46,127 - MainThread - nics - INFO - Property 'nic->vnic_profile->id' has correct value: c2f2c9bc-cb4b-494f-8908-c5740438065c
2017-06-25 15:29:46,129 - MainThread - nics - INFO - Property 'nic->plugged' has correct value: true
2017-06-25 15:29:46,343 - MainThread - vms - INFO - vm status is 'down'
2017-06-25 15:29:47,313 - MainThread - art.ll_lib.vms - INFO - Add disk <art.rhevm_api.data_struct.data_structures.Disk object at 0x7f2228d40750> with {'format': 'cow', 'bootable': True, 'wipe_after_delete': 'false', 'active': True, 'sparse': True, 'interface': 'virtio', 'type': 'data'} provisioned size: 6442450944
2017-06-25 15:29:47,319 - MainThread - diskattachments - INFO - Using Correlation-Id: diskattachments_create_485d21f3-dd6f-46a0
2017-06-25 15:29:49,380 - MainThread - diskattachments - INFO - New entity was added
2017-06-25 15:29:49,381 - MainThread - diskattachments - INFO - Property 'disk_attachment->interface' has correct value: virtio
2017-06-25 15:29:49,382 - MainThread - diskattachments - INFO - Property 'disk_attachment->bootable' has correct value: true
2017-06-25 15:30:10,285 - MainThread - disks - INFO - disk status is 'ok'
2017-06-25 15:30:10,302 - MainThread - root - INFO - Description snap_TestCase18337_REST_NFS_25153010280: Add snapshot to VM vm_TestCase18337_REST_NFS_2515293925 with {'wait': True}
2017-06-25 15:30:10,856 - MainThread - snapshots - INFO - Using Correlation-Id: snapshots_create_37c76d6e-5003-4b93
2017-06-25 15:30:14,133 - MainThread - snapshots - INFO - New entity was added
2017-06-25 15:30:14,142 - MainThread - snapshots - INFO - Property 'snapshot->description' has correct value: snap_TestCase18337_REST_NFS_25153010280
2017-06-25 15:30:14,151 - MainThread - art.ll_lib.jobs - INFO - Waiting for jobs ['Creating VM Snapshot .* for VM .*']
2017-06-25 15:30:14,296 - MainThread - art.ll_lib.jobs - INFO - Active jobs: ['Creating VM Snapshot snap_TestCase18337_REST_NFS_25153010280 for VM vm_TestCase18337_REST_NFS_2515293925']
2017-06-25 15:30:24,642 - MainThread - art.ll_lib.jobs - INFO - Active jobs: []
2017-06-25 15:30:24,845 - MainThread - art.ll_lib.jobs - INFO - JOB 'Creating VM Snapshot snap_TestCase18337_REST_NFS_25153010280 for VM vm_TestCase18337_REST_NFS_2515293925' TOOK 9.877 seconds
2017-06-25 15:30:24,846 - MainThread - art.ll_lib.jobs - INFO - All jobs are gone
2017-06-25 15:30:25,727 - MainThread - art.ll_lib.vms - INFO - Snapshots found: ['Active VM', 'snap_TestCase18337_REST_NFS_25153010280']
2017-06-25 15:30:27,144 - MainThread - snapshots - INFO - snapshot status is 'ok'
2017-06-25 15:30:27,901 - MainThread - art.ll_lib.vms - DEBUG - Skipping duplicate log-messages...
2017-06-25 15:30:27,902 - MainThread - art.ll_lib.vms - INFO - Waiting until snapshots: snap_TestCase18337_REST_NFS_25153010280 of vm_TestCase18337_REST_NFS_2515293925 vm are in one of following states: ['ok']
2017-06-25 15:30:28,629 - MainThread - art.ll_lib.jobs - INFO - Waiting for jobs ['Creating VM Snapshot .* for VM .*']
2017-06-25 15:30:28,780 - MainThread - art.ll_lib.jobs - INFO - Active jobs: []
2017-06-25 15:30:29,016 - MainThread - art.ll_lib.jobs - INFO - JOB 'Creating VM Snapshot snap_TestCase18337_REST_NFS_25153010280 for VM vm_TestCase18337_REST_NFS_2515293925' TOOK 9.877 seconds
2017-06-25 15:30:29,017 - MainThread - art.ll_lib.jobs - INFO - All jobs are gone
2017-06-25 15:30:29,028 - MainThread - root - INFO - Description snap_TestCase18337_REST_NFS_25153029011: Add snapshot to VM vm_TestCase18337_REST_NFS_2515293925 with {'wait': True}
2017-06-25 15:30:29,409 - MainThread - snapshots - INFO - Using Correlation-Id: snapshots_create_ee238a71-edcc-4282
2017-06-25 15:30:32,892 - MainThread - snapshots - INFO - New entity was added
2017-06-25 15:30:32,894 - MainThread - snapshots - INFO - Property 'snapshot->description' has correct value: snap_TestCase18337_REST_NFS_25153029011
2017-06-25 15:30:32,897 - MainThread - art.ll_lib.jobs - INFO - Waiting for jobs ['Creating VM Snapshot .* for VM .*']
2017-06-25 15:30:33,038 - MainThread - art.ll_lib.jobs - INFO - Active jobs: ['Creating VM Snapshot snap_TestCase18337_REST_NFS_25153029011 for VM vm_TestCase18337_REST_NFS_2515293925']
2017-06-25 15:30:43,377 - MainThread - art.ll_lib.jobs - DEBUG - Skipping duplicate log-messages...
2017-06-25 15:30:48,555 - MainThread - art.ll_lib.jobs - INFO - Active jobs: []
2017-06-25 15:30:48,722 - MainThread - art.ll_lib.jobs - INFO - JOB 'Creating VM Snapshot snap_TestCase18337_REST_NFS_25153029011 for VM vm_TestCase18337_REST_NFS_2515293925' TOOK 16.869 seconds
2017-06-25 15:30:48,723 - MainThread - art.ll_lib.jobs - INFO - All jobs are gone
2017-06-25 15:30:49,821 - MainThread - art.ll_lib.vms - INFO - Snapshots found: ['Active VM', 'snap_TestCase18337_REST_NFS_25153010280', 'snap_TestCase18337_REST_NFS_25153029011']
2017-06-25 15:30:51,123 - MainThread - snapshots - INFO - snapshot status is 'ok'
2017-06-25 15:30:51,992 - MainThread - art.ll_lib.vms - DEBUG - Skipping duplicate log-messages...
2017-06-25 15:30:51,994 - MainThread - art.ll_lib.vms - INFO - Waiting until snapshots: snap_TestCase18337_REST_NFS_25153029011 of vm_TestCase18337_REST_NFS_2515293925 vm are in one of following states: ['ok']
2017-06-25 15:30:52,861 - MainThread - art.ll_lib.jobs - INFO - Waiting for jobs ['Creating VM Snapshot .* for VM .*']
2017-06-25 15:30:53,031 - MainThread - art.ll_lib.jobs - INFO - Active jobs: []
2017-06-25 15:30:53,171 - MainThread - art.ll_lib.jobs - INFO - JOB 'Creating VM Snapshot snap_TestCase18337_REST_NFS_25153029011 for VM vm_TestCase18337_REST_NFS_2515293925' TOOK 16.869 seconds
2017-06-25 15:30:53,172 - MainThread - art.ll_lib.jobs - INFO - All jobs are gone
2017-06-25 15:30:53,183 - MainThread - root - INFO - Description snap_TestCase18337_REST_NFS_25153053172: Add snapshot to VM vm_TestCase18337_REST_NFS_2515293925 with {'wait': True}
2017-06-25 15:30:53,541 - MainThread - snapshots - INFO - Using Correlation-Id: snapshots_create_ba6cfa70-42e4-4684
2017-06-25 15:30:56,216 - MainThread - snapshots - INFO - New entity was added
2017-06-25 15:30:56,223 - MainThread - snapshots - INFO - Property 'snapshot->description' has correct value: snap_TestCase18337_REST_NFS_25153053172
2017-06-25 15:30:56,227 - MainThread - art.ll_lib.jobs - INFO - Waiting for jobs ['Creating VM Snapshot .* for VM .*']
2017-06-25 15:30:56,383 - MainThread - art.ll_lib.jobs - INFO - Active jobs: ['Creating VM Snapshot snap_TestCase18337_REST_NFS_25153053172 for VM vm_TestCase18337_REST_NFS_2515293925']
2017-06-25 15:31:06,727 - MainThread - art.ll_lib.jobs - INFO - Active jobs: []
2017-06-25 15:31:06,864 - MainThread - art.ll_lib.jobs - INFO - JOB 'Creating VM Snapshot snap_TestCase18337_REST_NFS_25153053172 for VM vm_TestCase18337_REST_NFS_2515293925' TOOK 9.398 seconds
2017-06-25 15:31:06,865 - MainThread - art.ll_lib.jobs - INFO - All jobs are gone
2017-06-25 15:31:08,051 - MainThread - art.ll_lib.vms - INFO - Snapshots found: ['Active VM', 'snap_TestCase18337_REST_NFS_25153010280', 'snap_TestCase18337_REST_NFS_25153029011', 'snap_TestCase18337_REST_NFS_25153053172']
2017-06-25 15:31:09,388 - MainThread - snapshots - INFO - snapshot status is 'ok'
2017-06-25 15:31:10,364 - MainThread - art.ll_lib.vms - DEBUG - Skipping duplicate log-messages...
2017-06-25 15:31:10,365 - MainThread - art.ll_lib.vms - INFO - Waiting until snapshots: snap_TestCase18337_REST_NFS_25153053172 of vm_TestCase18337_REST_NFS_2515293925 vm are in one of following states: ['ok']
2017-06-25 15:31:11,388 - MainThread - art.ll_lib.jobs - INFO - Waiting for jobs ['Creating VM Snapshot .* for VM .*']
2017-06-25 15:31:11,523 - MainThread - art.ll_lib.jobs - INFO - Active jobs: []
2017-06-25 15:31:11,650 - MainThread - art.ll_lib.jobs - INFO - JOB 'Creating VM Snapshot snap_TestCase18337_REST_NFS_25153053172 for VM vm_TestCase18337_REST_NFS_2515293925' TOOK 9.398 seconds
2017-06-25 15:31:11,650 - MainThread - art.ll_lib.jobs - INFO - All jobs are gone
2017-06-25 15:31:11,653 - MainThread - root - INFO - Description snap_TestCase18337_REST_NFS_25153111653: Add snapshot to VM vm_TestCase18337_REST_NFS_2515293925 with {'wait': True}
2017-06-25 15:31:12,002 - MainThread - snapshots - INFO - Using Correlation-Id: snapshots_create_f92cf347-4bc0-42cd
2017-06-25 15:31:13,908 - MainThread - snapshots - INFO - New entity was added
2017-06-25 15:31:13,913 - MainThread - snapshots - INFO - Property 'snapshot->description' has correct value: snap_TestCase18337_REST_NFS_25153111653
2017-06-25 15:31:13,916 - MainThread - art.ll_lib.jobs - INFO - Waiting for jobs ['Creating VM Snapshot .* for VM .*']
2017-06-25 15:31:14,113 - MainThread - art.ll_lib.jobs - INFO - Active jobs: ['Creating VM Snapshot snap_TestCase18337_REST_NFS_25153111653 for VM vm_TestCase18337_REST_NFS_2515293925']
2017-06-25 15:31:24,481 - MainThread - art.ll_lib.jobs - INFO - Active jobs: []
2017-06-25 15:31:24,634 - MainThread - art.ll_lib.jobs - INFO - JOB 'Creating VM Snapshot snap_TestCase18337_REST_NFS_25153111653 for VM vm_TestCase18337_REST_NFS_2515293925' TOOK 9.358 seconds
2017-06-25 15:31:24,634 - MainThread - art.ll_lib.jobs - INFO - All jobs are gone
2017-06-25 15:31:25,845 - MainThread - art.ll_lib.vms - INFO - Snapshots found: ['Active VM', 'snap_TestCase18337_REST_NFS_25153010280', 'snap_TestCase18337_REST_NFS_25153029011', 'snap_TestCase18337_REST_NFS_25153053172', 'snap_TestCase18337_REST_NFS_25153111653']
2017-06-25 15:31:27,273 - MainThread - snapshots - INFO - snapshot status is 'ok'
2017-06-25 15:31:28,411 - MainThread - art.ll_lib.vms - DEBUG - Skipping duplicate log-messages...
2017-06-25 15:31:28,412 - MainThread - art.ll_lib.vms - INFO - Waiting until snapshots: snap_TestCase18337_REST_NFS_25153111653 of vm_TestCase18337_REST_NFS_2515293925 vm are in one of following states: ['ok']
2017-06-25 15:31:29,472 - MainThread - art.ll_lib.jobs - INFO - Waiting for jobs ['Creating VM Snapshot .* for VM .*']
2017-06-25 15:31:29,616 - MainThread - art.ll_lib.jobs - INFO - Active jobs: []
2017-06-25 15:31:29,801 - MainThread - art.ll_lib.jobs - INFO - JOB 'Creating VM Snapshot snap_TestCase18337_REST_NFS_25153111653 for VM vm_TestCase18337_REST_NFS_2515293925' TOOK 9.358 seconds
2017-06-25 15:31:29,802 - MainThread - art.ll_lib.jobs - INFO - All jobs are gone
2017-06-25 15:31:29,813 - MainThread - root - INFO - Description snap_TestCase18337_REST_NFS_25153129804: Add snapshot to VM vm_TestCase18337_REST_NFS_2515293925 with {'wait': True}
2017-06-25 15:31:30,197 - MainThread - snapshots - INFO - Using Correlation-Id: snapshots_create_fb4395da-6508-4834
2017-06-25 15:31:32,779 - MainThread - snapshots - INFO - New entity was added
2017-06-25 15:31:32,783 - MainThread - snapshots - INFO - Property 'snapshot->description' has correct value: snap_TestCase18337_REST_NFS_25153129804
2017-06-25 15:31:32,786 - MainThread - art.ll_lib.jobs - INFO - Waiting for jobs ['Creating VM Snapshot .* for VM .*']
2017-06-25 15:31:32,934 - MainThread - art.ll_lib.jobs - INFO - Active jobs: ['Creating VM Snapshot snap_TestCase18337_REST_NFS_25153129804 for VM vm_TestCase18337_REST_NFS_2515293925']
2017-06-25 15:31:43,306 - MainThread - art.ll_lib.jobs - DEBUG - Skipping duplicate log-messages...
2017-06-25 15:31:48,498 - MainThread - art.ll_lib.jobs - INFO - Active jobs: []
2017-06-25 15:31:48,654 - MainThread - art.ll_lib.jobs - INFO - JOB 'Creating VM Snapshot snap_TestCase18337_REST_NFS_25153129804 for VM vm_TestCase18337_REST_NFS_2515293925' TOOK 16.7 seconds
2017-06-25 15:31:48,656 - MainThread - art.ll_lib.jobs - INFO - All jobs are gone
2017-06-25 15:31:50,221 - MainThread - art.ll_lib.vms - INFO - Snapshots found: ['Active VM', 'snap_TestCase18337_REST_NFS_25153010280', 'snap_TestCase18337_REST_NFS_25153029011', 'snap_TestCase18337_REST_NFS_25153053172', 'snap_TestCase18337_REST_NFS_25153111653', 'snap_TestCase18337_REST_NFS_25153129804']
2017-06-25 15:31:52,240 - MainThread - snapshots - INFO - snapshot status is 'ok'
2017-06-25 15:31:53,557 - MainThread - art.ll_lib.vms - DEBUG - Skipping duplicate log-messages...
2017-06-25 15:31:53,558 - MainThread - art.ll_lib.vms - INFO - Waiting until snapshots: snap_TestCase18337_REST_NFS_25153129804 of vm_TestCase18337_REST_NFS_2515293925 vm are in one of following states: ['ok']
2017-06-25 15:31:54,915 - MainThread - art.ll_lib.jobs - INFO - Waiting for jobs ['Creating VM Snapshot .* for VM .*']
2017-06-25 15:31:55,064 - MainThread - art.ll_lib.jobs - INFO - Active jobs: []
2017-06-25 15:31:55,210 - MainThread - art.ll_lib.jobs - INFO - JOB 'Creating VM Snapshot snap_TestCase18337_REST_NFS_25153129804 for VM vm_TestCase18337_REST_NFS_2515293925' TOOK 16.7 seconds
2017-06-25 15:31:55,211 - MainThread - art.ll_lib.jobs - INFO - All jobs are gone
2017-06-25 15:31:55,219 - MainThread - art.logging - INFO - ================================================================================
2017-06-25 15:31:55,220 - MainThread - art.logging - INFO - Test Name: rhevmtests.storage.storage_qcow2_v3.test_qcow2_v3.TestCase18337_REST_NFS.test_failure_of_SPM_during_amend
2017-06-25 15:31:55,220 - MainThread - art.logging - INFO - Iteration number: 1
2017-06-25 15:31:55,917 - MainThread - root - INFO - Update cluster cluster_upgrade_4_0_to_4_1 with {'version': '4.1'}
2017-06-25 15:31:56,080 - MainThread - clusters - INFO - Using Correlation-Id: clusters_update_a9d08035-2ed3-42f7
2017-06-25 15:31:56,764 - MainThread - clusters - INFO - cluster was updated
2017-06-25 15:31:56,770 - MainThread - clusters - INFO - Property 'cluster->version->major' has correct value: 4
2017-06-25 15:31:56,771 - MainThread - clusters - INFO - Property 'cluster->version->minor' has correct value: 1
2017-06-25 15:31:56,947 - MainThread - art.ll_lib.dcs - INFO - Update datacenter dc_upgrade_4_0_to_4_1 with {'version': '4.1'}
2017-06-25 15:31:56,949 - MainThread - datacenters - INFO - Using Correlation-Id: datacenters_update_8412fe1b-d0b5-4a4a
2017-06-25 15:32:00,927 - MainThread - datacenters - INFO - data_center was updated
2017-06-25 15:32:00,931 - MainThread - datacenters - INFO - Property 'data_center->version->major' has correct value: 4
2017-06-25 15:32:00,932 - MainThread - datacenters - INFO - Property 'data_center->version->minor' has correct value: 1
2017-06-25 15:32:01,689 - MainThread - rhevmtests.storage.storage_qcow2_v3.test_qcow2_v3 - INFO - Checking that upgrade_4_0_to_4_1nfs0 was upgraded: True
2017-06-25 15:32:03,049 - MainThread - diskattachments - INFO - Using Correlation-Id: diskattachments_update_6bc3a083-ceb2-435d
2017-06-25 15:32:03,903 - MainThread - diskattachments - INFO - disk_attachment was updated
2017-06-25 15:32:03,914 - MainThread - root - INFO - Get host resource by host_name host_mixed_1
2017-06-25 15:32:03,915 - MainThread - VDS - INFO - [10.35.82.51] Executing command pgrep vdsm
2017-06-25 15:32:05,631 - MainThread - VDS - INFO - [10.35.82.51] Executing command pgrep super
2017-06-25 15:32:06,931 - MainThread - art.ll_lib.hosts - INFO - kill VDSM pid: 17390 on host 10.35.82.51
2017-06-25 15:32:06,932 - MainThread - VDS - INFO - [10.35.82.51] Executing command kill -9 17390
2017-06-25 15:32:08,141 - MainThread - root - INFO - Get host object by host_name host_mixed_1. with {'attribute': 'name'}
2017-06-25 15:32:09,157 - MainThread - art.ll_lib.hosts - INFO - Check if host host_mixed_1 has state connecting
2017-06-25 15:32:19,683 - MainThread - art.ll_lib.hosts - INFO - Host host_mixed_1 has state connecting
2017-06-25 15:32:19,684 - MainThread - art.hl_lib.dcs - INFO - Wait for the Data center to become active
2017-06-25 15:32:19,874 - MainThread - datacenters - INFO - Waiting for query `name=dc_upgrade_4_0_to_4_1 and status=up` and event_id None up to 180 seconds,sampling every 10 second.
2017-06-25 15:33:00,652 - MainThread - root - INFO - Waits until SPM gets elected in DataCenter dc_upgrade_4_0_to_4_1 with {'sleep': 5, 'timeout': 300}
2017-06-25 15:33:00,663 - MainThread - root - INFO - Locate and return SPM or HSM host from specific datacenter dc_upgrade_4_0_to_4_1 with {'spm': True}
2017-06-25 15:33:01,310 - MainThread - root - INFO - Checking Storage Pool Manager (SPM) status of the host host_mixed_1.
2017-06-25 15:33:01,319 - MainThread - root - INFO - Get host object by host_name host_mixed_1. with {'attribute': 'name'}
2017-06-25 15:33:01,598 - MainThread - art.ll_lib.hosts - INFO - check_host_spm_status - SPM Status of host host_mixed_1 is: spm
2017-06-25 15:33:01,932 - MainThread - art.hl_lib.dcs - INFO - Waiting up to 120 seconds for sd upgrade_4_0_to_4_1nfs0 to be active
2017-06-25 15:33:02,323 - MainThread - root - DEBUG - Skipping duplicate log-messages...
2017-06-25 15:33:02,790 - MainThread - art.ll_lib.hosts - INFO - Check if host host_mixed_1 has state up
2017-06-25 15:33:02,791 - MainThread - art.ll_lib.hosts - INFO - Host host_mixed_1 has state up
2017-06-25 15:33:02,792 - MainThread - art.ll_lib.disks - INFO - Waiting for status ok on disks ['vm_TestCase18337_REST_NFS_2515293925_Disk1']
2017-06-25 15:33:03,045 - MainThread - art.ll_lib.vms - INFO - Waiting until snapshots: ['snap_TestCase18337_REST_NFS_25153010280', 'snap_TestCase18337_REST_NFS_25153029011', 'snap_TestCase18337_REST_NFS_25153053172', 'snap_TestCase18337_REST_NFS_25153111653', 'snap_TestCase18337_REST_NFS_25153129804'] of vm_TestCase18337_REST_NFS_2515293925 vm are in one of following states: ['ok']
2017-06-25 15:33:06,040 - MainThread - diskattachments - INFO - Using Correlation-Id: diskattachments_update_bd2b2558-1b7d-4c85
2017-06-25 15:33:06,426 - MainThread - diskattachments - INFO - disk_attachment was updated
2017-06-25 15:33:06,432 - MainThread - art.ll_lib.disks - DEBUG - Skipping duplicate log-messages...
2017-06-25 15:33:07,393 - MainThread - rhevmtests.storage.storage_qcow2_v3.helpers - INFO - disk vm_TestCase18337_REST_NFS_2515293925_Disk1 was updated to qcow2_v3
2017-06-25 15:33:08,249 - MainThread - art.logging - ERROR - Status: failed

Comment 4 Red Hat Bugzilla Rules Engine 2017-06-25 15:46:53 UTC
This bug report has Keywords: Regression or TestBlocker.
Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

Comment 8 Maor 2017-06-26 07:50:05 UTC
It looks like this is related to the fact that we only compare the active volume for the QCOW compat level.
You might not have encountered this before, since the DB result order of the volumes may differ between databases.
I will upload a patch that should solve this by making the active volume the last volume to be amended.
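
For readers following the fix, a minimal illustrative sketch (Python pseudocode with hypothetical names; the actual change is in the ovirt-engine Java code) of the ordering described above:

def volumes_to_amend(image_chain, target_compat="1.1"):
    """Return the volumes that still need amending, active volume last."""
    # If only the active volume were compared (the behaviour described above),
    # a chain whose active volume already has the target compat level would be
    # skipped entirely, leaving the snapshot volumes un-amended.
    pending = [vol for vol in image_chain if vol.qcow_compat != target_compat]
    # Amend the active volume last, so it only reflects the new compat level
    # once every snapshot volume beneath it has been amended.
    pending.sort(key=lambda vol: vol.is_active)
    return pending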

Comment 9 Avihai 2017-07-09 12:07:32 UTC
Verified.

Engine:
4.1.4-0.2.el7

VDSM:
4.19.20-1