Description of problem:
When attempting to attach an imported Fibre Channel storage domain to a Data Center, the process fails with no indication of why on the RHEV-M.

Version-Release number of selected component (if applicable):
rhevm-3.5.0-0.32.el6ev.noarch
vdsm-4.16.8.1-8.el6ev.x86_64

How reproducible:
Unknown

Steps to Reproduce:
1. Import an existing Fibre Channel storage domain after rebuilding RHEV-M
2. Attempt to attach the imported domain to a Data Center

Actual results:
Failure

Expected results:
Successful attachment
Seems as though the storage domain contains leftover metadata from an old pool. Maor - isn't this the same issue as bug 1179899 ? If so, please provide GSS with manual steps to handle the current situation so they can unblock the customer, and let's backport this patch to zstream.
(In reply to Allon Mureinik from comment #3)
> Seems as though the storage domain contains leftover metadata from an old
> pool.
>
> Maor - isn't this the same issue as bug 1179899?
> If so, please provide GSS with manual steps to handle the current situation
> so they can unblock the customer, and let's backport this patch to zstream.

No, it is the same as https://bugzilla.redhat.com/show_bug.cgi?id=1178646, which was fixed for 3.5.1.
Since VDSM takes a lock on the storage pool when performing an attach operation (we also use detach in the process), we can't import and attach a Storage Domain to an uninitialized Data Center (see [1]).

Currently, we have added a validation when trying to attach an imported Storage Domain to an uninitialized Data Center (see https://bugzilla.redhat.com/show_bug.cgi?id=1178646). This obstacle should be removed in a later version, once the storage pool is removed completely from VDSM.

[1] http://www.ovirt.org/Features/ImportStorageDomain#Implementation_gaps (section [6])
Allan, please try to initialize the Data Center first by attaching a new Storage Domain to it, and then try to import the FC Storage Domain again.
So, to clarify and set an action plan:

1. You cannot attach an imported domain to an uninitialized DC, since the import procedure requires an active SPM.
2. The workaround for the current customer ticket is to create a new storage domain and activate it in the DC. This domain will become the master domain, and an SPM will be started. Once this is done, you can attach the domain you wanted. Then, if you wish, you can deactivate and remove the previous master domain.
3. In 3.5.1, this behavior is made clearer by a CDA error message (see bug 1178646).
4. In 3.6.0, once we remove the SPM (see bug 1185830), we can work on the ability to attach a domain to a newly created DC. We'll use this BZ to track it.

Based on these comments, I've reduced the priority.
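The constraint and workaround above can be sketched as a toy model. This is a minimal Python illustration, not the engine's actual code: the class and function names (`DataCenter`, `attach`) are invented for this sketch, and it only mirrors the rule "an imported domain needs an active SPM, which exists only after the DC has a master domain."

```python
# Hypothetical sketch of the validation described above; all names are
# invented for illustration and do not reflect real engine/VDSM APIs.

class DataCenter:
    def __init__(self):
        self.initialized = False  # True once a master domain exists (SPM started)
        self.domains = []

def attach(dc, domain, imported=False):
    # Mirrors the 3.5.1 CDA-style check: an imported domain cannot be
    # attached to an uninitialized DC, because import requires an active SPM.
    if imported and not dc.initialized:
        raise RuntimeError(
            "Cannot attach an imported storage domain to an uninitialized "
            "Data Center; attach a new domain first to start the SPM.")
    dc.domains.append(domain)
    if not dc.initialized:
        dc.initialized = True  # first domain becomes master; SPM starts

# Workaround from the action plan: attach a new domain first, then import.
dc = DataCenter()
attach(dc, "new-master-domain")                   # initializes the DC
attach(dc, "imported-fc-domain", imported=True)   # now succeeds
```

Attaching the imported domain first (against a fresh `DataCenter`) raises the error, which is the blocked path this BZ tracks removing once the SPM goes away in 3.6.0.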
*** Bug 1348097 has been marked as a duplicate of this bug. ***
Maor, Nir, what are your thoughts about adding a "force clear pool MD" verb as an HSM verb? (Sort of a "this is a horrible idea, use it at your own peril" kind of thing for DR situations where the user knows the original pool/SPM is dead.)
For the DR scenario, maybe we should consider having this as part of the tool, and not a "first class citizen" in the import flow.
I believe this fix was merged into the build as well.
Storage pool initialization from an imported domain that was forcibly detached or destroyed is now possible. The DC gets initialized successfully.

2017-09-11 18:32:15,501+03 INFO [org.ovirt.engine.core.bll.storage.pool.AddStoragePoolWithStoragesCommand] (org.ovirt.thread.EE-ManagedThreadFactory-engine-Thread-17) [de2f8944-0186-4686-adad-d4756b42984f] Domain 'f7824010-0d88-4bcc-a167-5ab389dd8825' is already attached to a different storage pool, clean the storage domain metadata.
2017-09-11 18:32:15,508+03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CleanStorageDomainMetaDataVDSCommand] (org.ovirt.thread.EE-ManagedThreadFactory-engine-Thread-17) [de2f8944-0186-4686-adad-d4756b42984f] START, CleanStorageDomainMetaDataVDSCommand(HostName = host_mixed_3, StorageDomainVdsCommandParameters:{hostId='85f7a6ac-1ea9-47cf-a4f0-4f1655e63ddb', storageDomainId='f7824010-0d88-4bcc-a167-5ab389dd8825'}), log id: 64b2ad3b
2017-09-11 18:32:41,289+03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CleanStorageDomainMetaDataVDSCommand] (org.ovirt.thread.EE-ManagedThreadFactory-engine-Thread-17) [de2f8944-0186-4686-adad-d4756b42984f] FINISH, CleanStorageDomainMetaDataVDSCommand, log id: 64b2ad3b
2017-09-11 18:32:41,289+03 INFO [org.ovirt.engine.core.bll.storage.pool.AddStoragePoolWithStoragesCommand] (org.ovirt.thread.EE-ManagedThreadFactory-engine-Thread-17) [de2f8944-0186-4686-adad-d4756b42984f] Successfully cleaned metadata for storage domain 'f7824010-0d88-4bcc-a167-5ab389dd8825'.

Used:
ovirt-engine-4.2.0-0.0.master.20170907100709.git14accac.el7.centos.noarch
vdsm-4.20.3-20.git1fda949.el7.centos.x86_64
INFO: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason: [Project 'ovirt-engine'/Component 'vdsm' mismatch] For more info please contact: rhv-devops
Maor, can you please provide some doc text for this BZ?
Moving back to VERIFIED (comment 25)
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2018:1489