Bug 1463075 - Several VDSM restarts occur (1+ minute) after an initiated vdsm kill + SD deactivation + setting config value OvfUpdateIntervalInMinutes to 1 min
Summary: Several VDSM restarts occur (1+ minute) after an initiated vdsm kill + SD ...
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.1.3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Maor
QA Contact: Raz Tamir
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-06-20 05:31 UTC by Avihai
Modified: 2022-06-27 12:32 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-09-12 13:39:58 UTC
oVirt Team: Storage
Embargoed:
sbonazzo: ovirt-4.2-


Attachments
engine & vdsm logs (6.46 MB, application/x-gzip)
2017-06-20 05:31 UTC, Avihai


Links
Red Hat Issue Tracker RHV-46715 (last updated 2022-06-27 12:32:17 UTC)

Description Avihai 2017-06-20 05:31:53 UTC
Created attachment 1289397 [details]
engine & vdsm logs

Description of problem:
Following Bug 1459532, where a deactivation failure occurred because OvfUpdateIntervalInMinutes was set to 1 minute, we decided to open a new bug on the following issue:
several VDSM restarts occur starting about a minute after a single initiated vdsm kill.
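
For reference, the shortened interval was set with engine-config on the engine host. A minimal sketch (the helper function and root access are assumptions; engine-config -s and the engine restart are the standard mechanism):

import subprocess

def set_ovf_update_interval(minutes=1):
    # engine-config -s persists the value in the engine configuration;
    # the engine must be restarted before the new interval takes effect
    subprocess.run(
        ['engine-config', '-s', 'OvfUpdateIntervalInMinutes=%d' % minutes],
        check=True)
    subprocess.run(['systemctl', 'restart', 'ovirt-engine'], check=True)

set_ovf_update_interval(1)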

Version-Release number of selected component (if applicable):
Engine:
ovirt-engine-4.1.3.1-0.1.el7.noarch

VDSM:
4.19.17-1

How reproducible:
Until now this has occurred only once; I will try to reproduce with OvfUpdateIntervalInMinutes set to 1 minute and update.


Steps to Reproduce:
1. Create DC + cluster on v3
2. Upgrade the Cluster to ver 4 and kill the SPM during the operation
3. Upgrade the Cluster again
4. Deactivate the storage domain (a hedged SDK sketch of steps 2-4 follows below)
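
A minimal sketch of steps 2-4 with the oVirt Python SDK (ovirt-engine-sdk4); the connection details and the SSH kill of vdsm on the SPM host are placeholders, and the real test (ART) drives these actions through its own helpers:

import subprocess
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

conn = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                      username='admin@internal', password='secret',
                      insecure=True)
system = conn.system_service()

# Steps 2-3: upgrade the cluster; the SPM's vdsm is killed out-of-band
# (here over SSH) while the upgrade and its OVF update tasks are in flight
clusters = system.clusters_service()
cluster = clusters.list(search='name=cluster_upgrade_4_0_to_4_1')[0]
clusters.cluster_service(cluster.id).update(
    types.Cluster(version=types.Version(major=4, minor=1)))
subprocess.run(['ssh', 'root@spm-host', 'pkill -9 -f vdsmd'], check=True)

# Step 4: deactivate the storage domain attached to the data center
dcs = system.data_centers_service()
dc = dcs.list(search='name=dc_upgrade_4_0_to_4_1')[0]
attached = dcs.data_center_service(dc.id).storage_domains_service()
for sd in attached.list():
    if sd.name == 'upgrade_4_0_to_4_1nfs0':
        attached.storage_domain_service(sd.id).deactivate()
conn.close()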


Actual results:
SD deactivation failed, which is expected as other OVF update tasks are running, BUT ALSO several VDSM restarts occur, which is not expected.


Expected results:
SD deactivation should fail, which is expected as other OVF update tasks are running, BUT NO VDSM restarts should occur.

Additional info:
Automation output shows that the remove failed, but the actual cause was that deactivation of SD upgrade_4_0_to_4_1nfs0 did not occur; see the engine & VDSM logs:

From the engine log you can see the deactivation FAILED:
2017-06-05 02:13:30,757+03 ERROR [org.ovirt.engine.core.bll.storage.domain.DeactivateStorageDomainCommand] (DefaultQuartzScheduler3) [998764a] Aborting execution due to failure to stop SPM
2017-06-05 02:13:30,769+03 INFO  [org.ovirt.engine.core.bll.storage.domain.DeactivateStorageDomainCommand] (DefaultQuartzScheduler3) [998764a] Command [id=98559627-f344-438d-b479-b1b4b9845966]: Compensating CHANGED_STATUS_ONLY of org.ovirt.engine.core.common.businessentities.StoragePool; snapshot: EntityStatusSnapshot:{id='4a4485b7-181b-4ca3-a582-b330fa886500', status='Up'}.
2017-06-05 02:13:30,771+03 INFO  [org.ovirt.engine.core.bll.storage.domain.DeactivateStorageDomainCommand] (DefaultQuartzScheduler3) [998764a] Command [id=98559627-f344-438d-b479-b1b4b9845966]: Compensating CHANGED_STATUS_ONLY of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: EntityStatusSnapshot:{id='StoragePoolIsoMapId:{storagePoolId='4a4485b7-181b-4ca3-a582-b330fa886500', storageId='34d2bbea-f998-4e1d-b132-66cfca9d0b2e'}', status='Unknown'}.
2017-06-05 02:13:30,778+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler3) [998764a] EVENT_ID: USER_DEACTIVATE_STORAGE_DOMAIN_FAILED(969), Correlation ID: 998764a, Job ID: 0e1a8f21-d056-454f-a5d1-857766285ad5, Call Stack: null, Custom Event ID: -1, Message: Failed to deactivate Storage Domain upgrade_4_0_to_4_1nfs0 (Data Center dc_upgrade_4_0_to_4_1).
2017-06-05 02:13:30,782+03 INFO  [org.ovirt.engine.core.bll.storage.domain.DeactivateStorageDomainWithOvfUpdateCommand] (DefaultQuartzScheduler3) [998764a] Lock freed to object 'EngineLock:{exclusiveLocks='[34d2bbea-f998-4e1d-b132-66cfca9d0b2e=<STORAGE, ACTION_TYPE_FAILED_OBJECT_LOCKED>, 4a4485b7-181b-4ca3-a582-b330fa886500=<POOL, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'

 *** vdsm was intentionally killed ONLY ONCE, at 2017-06-05 02:12:26, yet the vdsm log shows a total of 3 vdsm restarts; the latter 2 restarts are the issue here:
 
VDSM log:

Expected after the kill:
1) 2017-06-05 02:12:33,104+0300 INFO  (MainThread) [vds] (PID: 61977) I am the actual vdsm 4.19.17-1.el7ev lynx12.qa.lab.tlv.redhat.com (3.10.0-514.21.1.el7.x86_64) (vdsm:145)

Not expected:
2) 2017-06-05 02:18:41,361+0300 INFO  (MainThread) [vds] (PID: 62794) I am the actual vdsm 4.19.17-1.el7ev lynx12.qa.lab.tlv.redhat.com (3.10.0-514.21.1.el7.x86_64) (vdsm:145)

Not expected:
3) 2017-06-05 02:21:01,646+0300 INFO  (MainThread) [vds] (PID: 63539) I am the actual vdsm 4.19.17-1.el7ev lynx12.qa.lab.tlv.redhat.com (3.10.0-514.21.1.el7.x86_64) (vdsm:145)
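
Counting these restarts in vdsm.log is straightforward; a small sketch of ours (not part of the test), keyed on the "I am" startup line shown above:

import re

def vdsm_restarts(log_path='/var/log/vdsm/vdsm.log'):
    # each vdsm startup logs one "I am the actual vdsm" line with its PID,
    # so more than one match after a single kill means unexpected restarts
    pattern = re.compile(
        r'^(\S+ \S+) INFO\s+\(MainThread\) \[vds\] \(PID: (\d+)\) I am')
    with open(log_path) as f:
        return [(m.group(1), m.group(2))
                for m in (pattern.match(line) for line in f) if m]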


Full automation timeline:
2017-06-05 02:10:49,583 - MainThread - art.logging - DEBUG - Skipping duplicate log-messages...
2017-06-05 02:10:49,584 - MainThread - art.logging - INFO - Test class description: 1. Create DC + cluster on v3
2017-06-05 02:10:49,584 - MainThread - art.logging - INFO - Test class description: 2. Upgrade the Cluster to ver 4 and kill the SPM during the operation
2017-06-05 02:10:49,584 - MainThread - art.logging - INFO - Test class description: 3. Upgrade the Cluster again
2017-06-05 02:10:49,584 - MainThread - art.logging - INFO - SETUP <TestCaseFunction 'test_verify_new_domain_version'>
2017-06-05 02:10:49,584 - MainThread - art.logging - INFO - PACKAGE SETUP: Entry(/var/lib/jenkins/workspace/rhv-4.1v4-ge-runner-tier3/automation/ART/art/tests/rhevmtests/storage/storage_qcow2_v3)
2017-06-05 02:10:49,692 - MainThread - rhevmtests.helpers - INFO - Retrieve all Storage domains
2017-06-05 02:10:49,824 - MainThread - rhevmtests.helpers - INFO - The storage domains names in engine: ['export_domain', 'iscsi_0', 'iscsi_1', 'iscsi_2', 'ISO_DOMAIN', 'netappISO', 'nfs_0', 'nfs_1', 'nfs_2', 'rhevm-qe-infra-glance', 'test_gluster_0', 'test_gluster_1', 'test_gluster_2']
2017-06-05 02:10:49,825 - MainThread - rhevmtests.helpers - INFO - The GE storage domains names: ['rhevm-qe-infra-glance', 'ovirt-image-repository', 'nfs_0', 'nfs_1', 'nfs_2', 'iscsi_0', 'iscsi_1', 'iscsi_2', 'test_gluster_0', 'test_gluster_1', 'test_gluster_2', 'rhevm-qe-infra-cinder', 'export_domain', 'netappISO', 'ISO_DOMAIN']
2017-06-05 02:10:50,617 - MainThread - test_utils - INFO - All tasks are gone
2017-06-05 02:10:50,617 - MainThread - pytest.art.matrix - INFO - The storage type switched to nfs
2017-06-05 02:10:50,623 - MainThread - root - INFO - Get SPM host from the list of hosts host_mixed_1,host_mixed_2,host_mixed_3
2017-06-05 02:10:50,627 - MainThread - root - INFO - Checking Storage Pool Manager (SPM) status of the host host_mixed_1.
2017-06-05 02:10:50,631 - MainThread - root - INFO - Get host object by host_name host_mixed_1. with {'attribute': 'name'}
2017-06-05 02:10:50,714 - MainThread - art.ll_lib.hosts - INFO - check_host_spm_status - SPM Status of host host_mixed_1 is: none
2017-06-05 02:10:50,715 - MainThread - root - ERROR - Failed to checking storage pool manager (spm) status of the host host_mixed_1.
2017-06-05 02:10:50,718 - MainThread - root - INFO - Checking Storage Pool Manager (SPM) status of the host host_mixed_2.
2017-06-05 02:10:50,722 - MainThread - root - INFO - Get host object by host_name host_mixed_2. with {'attribute': 'name'}
2017-06-05 02:10:50,806 - MainThread - art.ll_lib.hosts - INFO - check_host_spm_status - SPM Status of host host_mixed_2 is: spm
2017-06-05 02:10:50,811 - MainThread - root - INFO - Get any not SPM host from the list of hosts host_mixed_1,host_mixed_2,host_mixed_3 in the expected state with {'cluster_name': 'golden_env_mixed_1'}
2017-06-05 02:10:50,815 - MainThread - root - INFO - Get host object by host_name host_mixed_1. with {'attribute': 'name'}
2017-06-05 02:10:50,906 - MainThread - root - INFO - Get host object by host_name host_mixed_2. with {'attribute': 'name'}
2017-06-05 02:10:50,996 - MainThread - root - INFO - Get host object by host_name host_mixed_3. with {'attribute': 'name'}
2017-06-05 02:10:51,149 - MainThread - root - INFO - Checking Storage Pool Manager (SPM) status of the host host_mixed_1.
2017-06-05 02:10:51,153 - MainThread - root - INFO - Get host object by host_name host_mixed_1. with {'attribute': 'name'}
2017-06-05 02:10:51,230 - MainThread - art.ll_lib.hosts - INFO - check_host_spm_status - SPM Status of host host_mixed_1 is: none
2017-06-05 02:10:51,230 - MainThread - root - ERROR - Failed to checking storage pool manager (spm) status of the host host_mixed_1.
2017-06-05 02:10:51,230 - MainThread - art.ll_lib.dcs - INFO - Add datacenter dc_upgrade_4_0_to_4_1 with {'version': '4.0', 'name': 'dc_upgrade_4_0_to_4_1'}
2017-06-05 02:10:51,231 - MainThread - datacenters - INFO - Using Correlation-Id: datacenters_create_75a6bf29-041f-41b6
2017-06-05 02:10:51,428 - MainThread - datacenters - INFO - New entity was added
2017-06-05 02:10:51,432 - MainThread - datacenters - INFO - Property 'data_center->name' has correct value: dc_upgrade_4_0_to_4_1
2017-06-05 02:10:51,432 - MainThread - datacenters - INFO - Property 'data_center->local' has correct value: false
2017-06-05 02:10:51,432 - MainThread - datacenters - INFO - Property 'data_center->version->major' has correct value: 4
2017-06-05 02:10:51,432 - MainThread - datacenters - INFO - Property 'data_center->version->minor' has correct value: 0
2017-06-05 02:10:51,513 - MainThread - datacenters - INFO - Validated supported versions of data center dc_upgrade_4_0_to_4_1
2017-06-05 02:10:51,513 - MainThread - art.rhevm_api.tests_lib.low_level.clusters - INFO - Create cluster cluster_upgrade_4_0_to_4_1 with {'data_center': 'dc_upgrade_4_0_to_4_1', 'name': 'cluster_upgrade_4_0_to_4_1', 'version': '4.0', 'cpu': 'Intel Conroe Family'}
2017-06-05 02:10:51,602 - MainThread - clusters - INFO - Using Correlation-Id: clusters_create_7f02a10c-635f-404b
2017-06-05 02:10:51,947 - MainThread - clusters - INFO - New entity was added
2017-06-05 02:10:51,952 - MainThread - clusters - INFO - Property 'cluster->data_center->id' has correct value: 4a4485b7-181b-4ca3-a582-b330fa886500
2017-06-05 02:10:51,952 - MainThread - clusters - INFO - Property 'cluster->cpu->type_' has correct value: Intel Conroe Family
2017-06-05 02:10:51,952 - MainThread - clusters - INFO - Property 'cluster->version->major' has correct value: 4
2017-06-05 02:10:51,952 - MainThread - clusters - INFO - Property 'cluster->version->minor' has correct value: 0
2017-06-05 02:10:51,953 - MainThread - clusters - INFO - Property 'cluster->name' has correct value: cluster_upgrade_4_0_to_4_1
2017-06-05 02:10:51,964 - MainThread - root - INFO - Switch host host_mixed_1 to different cluster cluster_upgrade_4_0_to_4_1 with {'activate': True}
2017-06-05 02:10:51,968 - MainThread - root - INFO - Checks if host host_mixed_1 is in maintenance state
2017-06-05 02:10:51,972 - MainThread - root - INFO - Returns host host_mixed_1 status
2017-06-05 02:10:51,976 - MainThread - root - INFO - Get host object by host_name host_mixed_1. with {'attribute': 'name'}
2017-06-05 02:10:52,074 - MainThread - root - INFO - Deactivate the host host_mixed_1. with {'expected_status': 'maintenance'}
2017-06-05 02:10:52,077 - MainThread - root - INFO - Get host object by host_name host_mixed_1. with {'attribute': 'name'}
2017-06-05 02:10:52,167 - MainThread - root - INFO - Get host hosted-engine object with {'host_name': 'host_mixed_1'}
2017-06-05 02:10:52,244 - MainThread - root - ERROR - Failed to get host hosted-engine object with {'host_name': 'host_mixed_1'}
2017-06-05 02:10:52,245 - MainThread - hosts - INFO - Using Correlation-Id: hosts_syncAction_cf113b81-3b77-4103
2017-06-05 02:11:02,706 - MainThread - hosts - INFO - host status is 'maintenance'
2017-06-05 02:11:02,711 - MainThread - root - INFO - Update properties of host host_mixed_1 (provided in parameters) with {'cluster': 'cluster_upgrade_4_0_to_4_1'}
2017-06-05 02:11:02,716 - MainThread - root - INFO - Get host object by host_name host_mixed_1. with {'attribute': 'name'}
2017-06-05 02:11:02,900 - MainThread - hosts - INFO - Using Correlation-Id: hosts_update_0574ea9e-99f0-41a1
2017-06-05 02:11:08,425 - MainThread - hosts - INFO - host was updated
2017-06-05 02:11:08,431 - MainThread - hosts - INFO - Property 'host->cluster->id' has correct value: 696dc642-3da5-4b8f-9662-10eee447af66
2017-06-05 02:11:08,441 - MainThread - root - INFO - Activate host host_mixed_1 (set status to UP) with {'wait': True}
2017-06-05 02:11:08,445 - MainThread - root - INFO - Get host object by host_name host_mixed_1. with {'attribute': 'name'}
2017-06-05 02:11:08,527 - MainThread - hosts - INFO - Using Correlation-Id: hosts_syncAction_26f9a20b-a7aa-4bba
2017-06-05 02:11:18,765 - MainThread - hosts - INFO - host status is 'up'
2017-06-05 02:11:18,769 - MainThread - root - INFO - Get SPM host from the list of hosts host_mixed_1,host_mixed_2,host_mixed_3
2017-06-05 02:11:18,773 - MainThread - root - INFO - Checking Storage Pool Manager (SPM) status of the host host_mixed_1.
2017-06-05 02:11:18,776 - MainThread - root - INFO - Get host object by host_name host_mixed_1. with {'attribute': 'name'}
2017-06-05 02:11:18,854 - MainThread - art.ll_lib.hosts - INFO - check_host_spm_status - SPM Status of host host_mixed_1 is: none
2017-06-05 02:11:18,855 - MainThread - root - ERROR - Failed to checking storage pool manager (spm) status of the host host_mixed_1.
2017-06-05 02:11:18,858 - MainThread - root - INFO - Checking Storage Pool Manager (SPM) status of the host host_mixed_2.
2017-06-05 02:11:18,862 - MainThread - root - INFO - Get host object by host_name host_mixed_2. with {'attribute': 'name'}
2017-06-05 02:11:18,939 - MainThread - art.ll_lib.hosts - INFO - check_host_spm_status - SPM Status of host host_mixed_2 is: spm
2017-06-05 02:11:18,939 - MainThread - art.hl_lib.sds - INFO - Add NFS storage domain upgrade_4_0_to_4_1nfs0 to datacenter dc_upgrade_4_0_to_4_1
2017-06-05 02:11:19,029 - MainThread - root - INFO - Get Host host_mixed_2 compatibility version
2017-06-05 02:11:19,043 - MainThread - root - INFO - Get host object by host_name host_mixed_2. with {'attribute': 'name'}
2017-06-05 02:11:19,197 - MainThread - root - INFO - Get Cluster golden_env_mixed_1 compatibility version
2017-06-05 02:11:19,261 - MainThread - storagedomains - INFO - Using Correlation-Id: storagedomains_create_c5bdad2f-f5fa-4249
2017-06-05 02:11:24,089 - MainThread - storagedomains - INFO - New entity was added
2017-06-05 02:11:24,091 - MainThread - storagedomains - INFO - Property 'storage_domain->storage->address' has correct value: vserver-nas01.qa.lab.tlv.redhat.com
2017-06-05 02:11:24,091 - MainThread - storagedomains - INFO - Property 'storage_domain->storage->path' has correct value: /nas01/ge_4_unused_nfs_0
2017-06-05 02:11:24,091 - MainThread - storagedomains - INFO - Property 'storage_domain->storage->type_' has correct value: nfs
2017-06-05 02:11:24,091 - MainThread - storagedomains - INFO - Property 'storage_domain->storage_format' has correct value: v3
2017-06-05 02:11:24,092 - MainThread - storagedomains - INFO - Property 'storage_domain->name' has correct value: upgrade_4_0_to_4_1nfs0
2017-06-05 02:11:24,092 - MainThread - storagedomains - INFO - Attribute: import___ changed to: import
2017-06-05 02:11:24,092 - MainThread - storagedomains - INFO - Property 'storage_domain->type_' has correct value: data
2017-06-05 02:11:24,313 - MainThread - storagedomains - INFO - Using Correlation-Id: storagedomains_create_b5f9c16c-84a9-4441
2017-06-05 02:12:20,614 - MainThread - storagedomains - INFO - New entity was added
2017-06-05 02:12:20,616 - MainThread - storagedomains - INFO - Property 'storage_domain->id' has correct value: 34d2bbea-f998-4e1d-b132-66cfca9d0b2e
2017-06-05 02:12:20,616 - MainThread - storagedomains - INFO - Attribute: import___ changed to: import
2017-06-05 02:12:20,728 - MainThread - datacenters - INFO - data_center status is 'up'
2017-06-05 02:12:20,953 - MainThread - storagedomains - WARNING - Storage domain upgrade_4_0_to_4_1nfs0 already activated
2017-06-05 02:12:20,954 - MainThread - art.ll_lib.jobs - INFO - Waiting for jobs ['Adding .* Storage Domain .*', 'Activating Storage Domain .* on Data Center .*']
2017-06-05 02:12:21,037 - MainThread - art.ll_lib.jobs - INFO - Active jobs: []
2017-06-05 02:12:21,108 - MainThread - art.ll_lib.jobs - INFO - JOB 'Adding NFS Storage Domain upgrade_4_0_to_4_1nfs0 to Data Center <UNKNOWN>' TOOK 2.442 seconds
2017-06-05 02:12:21,183 - MainThread - art.ll_lib.jobs - INFO - All jobs are gone
2017-06-05 02:12:22,126 - MainThread - test_utils - INFO - All tasks are gone
2017-06-05 02:12:22,127 - MainThread - art.logging - INFO - ================================================================================
2017-06-05 02:12:22,128 - MainThread - art.logging - INFO - Test Name: rhevmtests.storage.storage_qcow2_v3.test_qcow2_v3.TestCase18307_REST_NFS.test_verify_new_domain_version
2017-06-05 02:12:22,128 - MainThread - art.logging - INFO - Iteration number: 264
2017-06-05 02:12:22,132 - MainThread - root - INFO - Update cluster cluster_upgrade_4_0_to_4_1 with {'version': '4.1'}
2017-06-05 02:12:22,192 - MainThread - clusters - INFO - Using Correlation-Id: clusters_update_0d44bc90-8d06-4417
2017-06-05 02:12:22,400 - MainThread - clusters - INFO - cluster was updated
2017-06-05 02:12:22,402 - MainThread - clusters - INFO - Property 'cluster->version->major' has correct value: 4
2017-06-05 02:12:22,402 - MainThread - clusters - INFO - Property 'cluster->version->minor' has correct value: 1
2017-06-05 02:12:22,460 - MainThread - art.ll_lib.dcs - INFO - Update datacenter dc_upgrade_4_0_to_4_1 with {'version': '4.1'}
2017-06-05 02:12:22,461 - MainThread - datacenters - INFO - Using Correlation-Id: datacenters_update_6d70dd15-64a6-4de1
2017-06-05 02:12:25,024 - MainThread - datacenters - INFO - data_center was updated
2017-06-05 02:12:25,025 - MainThread - datacenters - INFO - Property 'data_center->version->major' has correct value: 4
2017-06-05 02:12:25,026 - MainThread - datacenters - INFO - Property 'data_center->version->minor' has correct value: 1
2017-06-05 02:12:25,030 - MainThread - root - INFO - Get host resource by host_name host_mixed_1
2017-06-05 02:12:25,030 - MainThread - VDS - INFO - [10.35.66.4] Executing command pgrep vdsm
2017-06-05 02:12:25,624 - MainThread - VDS - INFO - [10.35.66.4] Executing command pgrep super
2017-06-05 02:12:26,204 - MainThread - art.ll_lib.hosts - INFO - kill VDSM pid: 10813 on host 10.35.66.4
2017-06-05 02:12:26,205 - MainThread - VDS - INFO - [10.35.66.4] Executing command kill -9 10813
2017-06-05 02:12:26,751 - MainThread - root - INFO - Get host object by host_name host_mixed_1. with {'attribute': 'name'}
2017-06-05 02:12:26,914 - MainThread - art.ll_lib.hosts - INFO - Check if host host_mixed_1 has state connecting
2017-06-05 02:12:37,004 - MainThread - art.ll_lib.hosts - DEBUG - Skipping duplicate log-messages...
2017-06-05 02:12:37,004 - MainThread - art.ll_lib.hosts - INFO - Host host_mixed_1 has state connecting
2017-06-05 02:12:37,067 - MainThread - datacenters - INFO - Waiting for query `name=dc_upgrade_4_0_to_4_1 and status=up` and event_id None up to 180 seconds,sampling every 10 second.
2017-06-05 02:13:17,382 - MainThread - root - INFO - Get host object by host_name host_mixed_1. with {'attribute': 'name'}
2017-06-05 02:13:17,535 - MainThread - art.ll_lib.hosts - INFO - Check if host host_mixed_1 has state up
2017-06-05 02:13:17,535 - MainThread - art.ll_lib.hosts - INFO - Host host_mixed_1 has state up
2017-06-05 02:13:17,539 - MainThread - root - INFO - Update cluster cluster_upgrade_4_0_to_4_1 with {'version': '4.1'}
2017-06-05 02:13:17,592 - MainThread - clusters - INFO - Using Correlation-Id: clusters_update_fecde6da-ede0-4ebc
2017-06-05 02:13:17,690 - MainThread - clusters - INFO - cluster was updated
2017-06-05 02:13:17,691 - MainThread - clusters - INFO - Property 'cluster->version->major' has correct value: 4
2017-06-05 02:13:17,692 - MainThread - clusters - INFO - Property 'cluster->version->minor' has correct value: 1
2017-06-05 02:13:17,776 - MainThread - art.ll_lib.dcs - INFO - Update datacenter dc_upgrade_4_0_to_4_1 with {'version': '4.1'}
2017-06-05 02:13:17,777 - MainThread - datacenters - INFO - Using Correlation-Id: datacenters_update_fa1b6ec0-5608-4a27
2017-06-05 02:13:17,866 - MainThread - datacenters - INFO - data_center was updated
2017-06-05 02:13:17,867 - MainThread - datacenters - INFO - Property 'data_center->version->major' has correct value: 4
2017-06-05 02:13:17,867 - MainThread - datacenters - INFO - Property 'data_center->version->minor' has correct value: 1
2017-06-05 02:13:18,111 - MainThread - rhevmtests.storage.storage_qcow2_v3.test_qcow2_v3 - INFO - Checking that upgrade_4_0_to_4_1nfs0 was upgraded: True
2017-06-05 02:13:18,112 - MainThread - art.logging - INFO - ================================================================================
2017-06-05 02:13:18,113 - MainThread - art.logging - INFO - TEARDOWN <TestCaseFunction 'test_verify_new_domain_version'>
2017-06-05 02:13:18,113 - MainThread - pytest.art.matrix - INFO - The storage type reset to None
2017-06-05 02:13:18,490 - MainThread - rhevmtests.storage.helpers - INFO - Checking if domain upgrade_4_0_to_4_1nfs0 is active in dc dc_upgrade_4_0_to_4_1
2017-06-05 02:13:18,490 - MainThread - storagedomains - DEBUG - Skipping duplicate log-messages...
2017-06-05 02:13:18,604 - MainThread - storagedomains - INFO - Domain upgrade_4_0_to_4_1nfs0 in dc dc_upgrade_4_0_to_4_1 is active
2017-06-05 02:13:18,604 - MainThread - storagedomains - INFO - Checking if domain upgrade_4_0_to_4_1nfs0 is active in dc dc_upgrade_4_0_to_4_1
2017-06-05 02:13:18,725 - MainThread - storagedomains - INFO - Domain upgrade_4_0_to_4_1nfs0 in dc dc_upgrade_4_0_to_4_1 is active
2017-06-05 02:13:19,533 - MainThread - test_utils - INFO - All tasks are gone
2017-06-05 02:13:19,658 - MainThread - storagedomains - INFO - Deactivating domain upgrade_4_0_to_4_1nfs0 in data center dc_upgrade_4_0_to_4_1
2017-06-05 02:13:19,659 - MainThread - storagedomains - INFO - Using Correlation-Id: storagedomains_syncAction_84fe63ed-a1b9-4378
2017-06-05 02:18:24,398 - MainThread - storagedomains - INFO - Removing storage domain upgrade_4_0_to_4_1nfs0
2017-06-05 02:18:24,523 - MainThread - storagedomains - INFO - Using Correlation-Id: storagedomains_delete_72f4c775-6cf0-442d
2017-06-05 02:18:25,920 - MainThread - api_utils - ERROR - Failed to delete element NOT as expected:
	Status: 409
	Reason: Conflict
	Detail: [Cannot remove Storage.
 The Storage Domain metadata indicates it is already attached to a Data Center hence cannot be formatted.
 To remove the Storage Domain one should either remove it without the format option or attach it to an existing Data Center, detach it, and try again.]

2017-06-05 02:18:25,920 - MainThread - storagedomains - ERROR - Response code is not valid, expected is: [200, 202, 204], actual is: 409
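
The 409 detail above offers two recovery paths; a minimal sketch of the first (removing the domain without formatting it), with the same hypothetical SDK connection details as in the earlier sketch:

import ovirtsdk4 as sdk

conn = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                      username='admin@internal', password='secret',
                      insecure=True)
sds = conn.system_service().storage_domains_service()
sd = sds.list(search='name=upgrade_4_0_to_4_1nfs0')[0]
# format=False leaves the domain metadata intact, avoiding the 409 above;
# host names any active host able to perform the operation
sds.storage_domain_service(sd.id).remove(host='host_mixed_1', format=False)
conn.close()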


Automation Test:
rhevmtests.storage.storage_qcow2_v3.test_qcow2_v3.TestCase18307_REST_NFS.test_verify_new_domain_version

Comment 1 Maor 2017-06-20 21:28:06 UTC
Can you please try to reproduce this again, to see if it is indeed consistent?
If it does reproduce, please attach the sanlock and messages logs, which should be located in /var/log.

Comment 2 Maor 2017-06-20 21:34:31 UTC
I changed the summary to also indicate the change of the config value OvfUpdateIntervalInMinutes to 1 min.
If you plan to reproduce it without changing the config value (probably better, so we can simulate a genuine user experience), I will change the summary back.

Comment 3 Allon Mureinik 2017-06-21 05:40:40 UTC
(In reply to Maor from comment #2)
> I Changed the summary to also indicate the change of the config value
> OvfUpdateIntervalInMinutes to 1 min.
> If you plan to reproduce it without the change of the config value (probably
> better, so we can simulate a genuine user experience) I will change the
> summary back again.

The question here is whether it only reproduces with a 1-minute interval. If so, it's probably not an interesting use case, and while Avihai is correct and it IS a bug, we probably won't invest the effort in solving it.

Comment 4 Avihai 2017-06-21 07:52:34 UTC
(In reply to Allon Mureinik from comment #3)
> (In reply to Maor from comment #2)
> > I Changed the summary to also indicate the change of the config value
> > OvfUpdateIntervalInMinutes to 1 min.
> > If you plan to reproduce it without the change of the config value (probably
> > better, so we can simulate a genuine user experience) I will change the
> > summary back again.
> 
> The question here is whether it only reproduces with a 1-minute interval. If
> so, it's probably not an interesting usecase, and while Avishai is correct
> and it IS a bug, we probably won't invest the effort in solving it.

Allon, so far I have seen this issue only when running with a 1-minute interval.

If you decide this is not interesting (it does not point to a bigger issue, or cannot happen over a long period at a customer site) even if it reproduces with a 1-minute interval, please close this bug as WONTFIX.

If this is interesting even with a 1-minute interval, then please change it back to NEEDINFO and I'll reproduce.

