Bug 1568447
| Summary: | Moving StorageDomain to Maintenance releases lock twice | | |
| --- | --- | --- | --- |
| Product: | [oVirt] ovirt-engine | Reporter: | Ravi Nori <rnori> |
| Component: | BLL.Infra | Assignee: | Ravi Nori <rnori> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Shir Fishbain <sfishbai> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.2.0 | CC: | bugs, ebenahar, lsvaty, mperina, rnori |
| Target Milestone: | ovirt-4.3.0 | Flags: | rule-engine: ovirt-4.3+ |
| Target Release: | 4.3.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ovirt-engine-4.3.0_alpha | Doc Type: | No Doc Update |
| Doc Text: | | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-02-13 07:43:12 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Infra | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description (Ravi Nori, 2018-04-17 14:16:28 UTC)
Setting priority to high, because another flow can step in and perform an action in the window between the child command releasing the lock and the parent command finishing its execution and trying to release the lock again (locks acquired in a parent command cannot be released by its child commands).

Which kind of storage domain should be added? What are the steps to reproduce?

I was able to reproduce this with an NFS storage domain.

Steps to Reproduce:
1. Add a second storage domain to the Data Center.
2. From the Storage > Storage Domains > Data2 > Data Center tab, move the domain to Maintenance.

In the logs, the lock release message should appear only once.

Verified. The WARN messages no longer appear in the engine log. I checked it for NFS, iSCSI, and Gluster:

```
2019-01-30 10:10:41,963+02 INFO [org.ovirt.engine.core.bll.storage.domain.DeactivateStorageDomainWithOvfUpdateCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-16) [ae1ffed1-1ca4-4cdd-8860-2216c556b9e0] Lock freed to object 'EngineLock:{exclusiveLocks='[9d44b450-0705-4240-b31b-28085605b562=STORAGE]', sharedLocks='[9807da5c-e4f9-4e5b-b9d1-b88fc9003d62=POOL]'}'
2019-01-30 10:10:58,435+02 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (EE-ManagedThreadFactory-engine-Thread-159611) [] Moving domain '9d44b450-0705-4240-b31b-28085605b562' to maintenance
2019-01-30 10:10:58,442+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-159611) [] EVENT_ID: STORAGE_DOMAIN_MOVED_TO_MAINTENANCE(1,029), Storage Domain nfs_2 (Data Center golden_env_mixed) successfully moved to Maintenance as it's no longer accessed by any Host of the Data Center.
2019-01-30 10:13:18,423+02 INFO [org.ovirt.engine.core.bll.storage.domain.DeactivateStorageDomainWithOvfUpdateCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [dc62cc3e-bd0e-453d-b408-9c9d2cb7e261] Lock freed to object 'EngineLock:{exclusiveLocks='[f5bee9b2-d43a-4982-b435-a27e5e51edae=STORAGE, 9807da5c-e4f9-4e5b-b9d1-b88fc9003d62=POOL]', sharedLocks=''}'
2019-01-30 10:13:21,336+02 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (EE-ManagedThreadFactory-engine-Thread-159677) [] Adding domain 'f5bee9b2-d43a-4982-b435-a27e5e51edae' to the domains in maintenance cache
2019-01-30 10:13:29,116+02 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (EE-ManagedThreadFactory-engine-Thread-159683) [] Moving domain 'f5bee9b2-d43a-4982-b435-a27e5e51edae' to maintenance
2019-01-30 10:13:29,123+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-159683) [] EVENT_ID: STORAGE_DOMAIN_MOVED_TO_MAINTENANCE(1,029), Storage Domain iscsi_1 (Data Center golden_env_mixed) successfully moved to Maintenance as it's no longer accessed by any Host of the Data Center.
2019-01-30 10:15:14,257+02 INFO [org.ovirt.engine.core.bll.RefreshHostCapabilitiesCommand] (ForkJoinPool-1-worker-5) [c4604bd] Refresh host capabilities finished. Lock released.
Monitoring can run now for host 'host_mixed_2' from data-center 'golden_env_mixed'
2019-01-30 10:15:16,710+02 INFO [org.ovirt.engine.core.bll.storage.domain.DeactivateStorageDomainWithOvfUpdateCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-78) [c856a8f9-64a9-4ea6-8120-f5d2c3cee9d8] Lock freed to object 'EngineLock:{exclusiveLocks='[a968711d-78aa-4bc6-bac9-b60d4afc0542=STORAGE]', sharedLocks='[9807da5c-e4f9-4e5b-b9d1-b88fc9003d62=POOL]'}'
2019-01-30 10:15:21,662+02 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (EE-ManagedThreadFactory-engine-Thread-159736) [644b9bf3] Adding domain 'a968711d-78aa-4bc6-bac9-b60d4afc0542' to the domains in maintenance cache
2019-01-30 10:15:31,485+02 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (EE-ManagedThreadFactory-engine-Thread-159742) [] Moving domain 'a968711d-78aa-4bc6-bac9-b60d4afc0542' to maintenance
2019-01-30 10:15:31,492+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-159742) [] EVENT_ID: STORAGE_DOMAIN_MOVED_TO_MAINTENANCE(1,029), Storage Domain test_gluster_1 (Data Center golden_env_mixed) successfully moved to Maintenance as it's no longer accessed by any Host of the Data Center.
```

Verified with:
ovirt-engine-4.3.0-0.8.rc2.el7.noarch
vdsm-4.30.6-1.el7ev.x86_64

This bugzilla is included in the oVirt 4.3.0 release, published on February 4th 2019. Since the problem described in this bug report should be resolved in oVirt 4.3.0, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.
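The race described in the priority comment, where a child command frees an engine lock that the parent command later tries to free again, can be sketched with a minimal, hypothetical lock-manager model. The class and method names below (`ToyLockManager`, `acquire`, `release`) are illustrative only and are not oVirt's actual `EngineLock` API:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal, hypothetical model of exclusive in-memory locks. Because release()
// does not check who owns the lock, a double release is dangerous: the second
// release can silently free a lock that another flow has since acquired.
class ToyLockManager {
    private final Map<String, String> exclusiveLocks = new HashMap<>();

    synchronized boolean acquire(String objectId, String owner) {
        if (exclusiveLocks.containsKey(objectId)) {
            return false; // already held by another command
        }
        exclusiveLocks.put(objectId, owner);
        return true;
    }

    synchronized boolean release(String objectId) {
        return exclusiveLocks.remove(objectId) != null;
    }
}

public class DoubleReleaseDemo {
    public static void main(String[] args) {
        ToyLockManager locks = new ToyLockManager();
        String sd = "9d44b450-0705-4240-b31b-28085605b562"; // STORAGE lock key

        locks.acquire(sd, "DeactivateStorageDomainWithOvfUpdate");
        locks.release(sd);                   // child command frees the lock
        locks.acquire(sd, "someOtherFlow");  // another flow steps in
        boolean stolen = locks.release(sd);  // parent frees it a second time,
                                             // releasing the other flow's lock
        System.out.println("second release freed another flow's lock: " + stolen);
    }
}
```

This is why the fix ensures the lock is released exactly once: with an ownerless release, the second "Lock freed" is not a harmless no-op but a release of whatever command now holds the lock.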
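The verification above boils down to checking that no command's "Lock freed" message appears twice in the engine log. A hedged sketch of such a check is below; the regex assumes the log format shown in the excerpts (correlation id in square brackets directly before the message), and the class name is hypothetical:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper: flag command correlation ids whose "Lock freed"
// message appears more than once, which would indicate a double release.
public class DuplicateLockReleaseCheck {
    // 36-char UUID correlation id in brackets immediately before the message.
    private static final Pattern LOCK_FREED =
            Pattern.compile("\\[([0-9a-f-]{36})\\] Lock freed to object");

    public static List<String> duplicateReleases(List<String> logLines) {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : logLines) {
            Matcher m = LOCK_FREED.matcher(line);
            if (m.find()) {
                counts.merge(m.group(1), 1, Integer::sum);
            }
        }
        List<String> duplicates = new ArrayList<>();
        counts.forEach((id, n) -> { if (n > 1) duplicates.add(id); });
        return duplicates;
    }
}
```

With the fixed build, running this over the excerpts above yields an empty list: each of the three deactivate commands frees its lock exactly once.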